TypeScript for .NET Engineers

A comprehensive guide for senior .NET engineers transitioning to the modern TypeScript/Node.js ecosystem.


Who This Book Is For

This book is designed for senior .NET engineers — people with deep Microsoft experience (C#, ASP.NET, SQL Server, Azure DevOps, Visual Studio) — who are transitioning into a modern open-source stack built on TypeScript, Node.js, React/Vue, PostgreSQL, and GitHub-centric workflows.

This is not a beginner curriculum. It assumes strong engineering fundamentals and focuses on mapping existing knowledge to new tools, closing specific gaps, and building fluency in the ecosystem conventions that differ from the Microsoft world.

You should read this book if you are:

  • A senior engineer with 5+ years of .NET/Microsoft stack experience
  • Comfortable with strongly-typed languages, ORMs, middleware pipelines, and enterprise patterns
  • Now working on (or transitioning to) projects using: Next.js, NestJS, React, Vue 3, PostgreSQL, GitHub, CLI-first tooling

Our Stack

Every concept in this book is grounded in a specific, opinionated toolchain. We don’t survey the ecosystem — we teach our stack and explain why we chose it.

Category | Our Tool | .NET Equivalent
Frontend Framework | React (Next.js) / Vue 3 (Nuxt) | Blazor / Razor Pages
Backend Framework | NestJS | ASP.NET Core
Database | PostgreSQL | SQL Server
ORM | Prisma / Drizzle | Entity Framework
Auth | Clerk | ASP.NET Identity / Azure AD
Hosting | Render | Azure App Service / IIS
Error Monitoring | Sentry | Application Insights
Code Quality | SonarCloud | SonarQube
Security Scanning | Snyk + Semgrep | Fortify / Veracode
AI Coding Assistant | Claude Code (CLI) | GitHub Copilot
Version Control | GitHub (CLI-first) | Azure DevOps / TFS
Package Manager | pnpm | NuGet
Type Safety | TypeScript + Zod | C# type system

How This Book Is Organized

The book is organized into 9 parts with 78 chapters, plus 4 appendices:

  1. Foundation & Mental Models — Mapping .NET concepts to the JS/TS world
  2. TypeScript Deep Dive — The type system that replaces C#
  3. Frontend Frameworks — React, Vue 3, Next.js, Nuxt
  4. Backend with NestJS — The ASP.NET Core of Node.js
  5. Polyglot Architectures — When .NET or Python is the right backend
  6. Data Layer — PostgreSQL, ORMs, migrations
  7. DevOps & Toolchain — GitHub, CI/CD, Render, Docker
  8. Security & Quality — Sentry, SonarCloud, Snyk, Semgrep
  9. Workflow & Productivity — Claude Code, CLI workflows, team conventions

Recommended reading order: Work through Parts I–III in the first two weeks, then continue through the remaining parts as you begin project work. Each chapter is self-contained but builds on concepts from earlier chapters.

Design Principles

  • Respect your intelligence — you are a senior engineer, not a bootcamp student
  • Always bridge from .NET — every concept is grounded in what you already know
  • Show real code — examples from actual projects, not toy demos
  • CLI-first — we work in terminals, not GUIs
  • Opinionated — we chose this stack for reasons; we’ll explain them

Expected Outcome

After completing this book, you should be able to:

  • Stand up a full Next.js or NestJS project from scratch
  • Write idiomatic TypeScript with proper type safety across all layers
  • Work confidently with PostgreSQL and a TypeScript ORM
  • Use GitHub, Render, Sentry, and the security tools without hand-holding
  • Use Claude Code as a daily productivity tool
  • Architect polyglot systems — knowing when to keep .NET, when to add Python, and how to maintain type safety across language boundaries
  • Ship your first PR within the first week of project work

The Landscape: .NET World vs. JS/TS Ecosystem

For .NET engineers who know: The Microsoft stack — C#, ASP.NET Core, EF Core, Azure DevOps, Azure hosting
You’ll learn: How the JS/TS ecosystem maps to your .NET mental model, and which specific tools this curriculum teaches and why
Time: 12 min read

The .NET Way (What You Already Know)

Microsoft built a vertically integrated stack. When you start a new ASP.NET Core project, you get a curated, co-designed set of tools: C# as the language, the CLR as the runtime, ASP.NET Core as the web framework, Entity Framework Core as the ORM, and NuGet as the package manager. Visual Studio (or Rider) handles the IDE. Azure DevOps handles CI/CD. Azure handles hosting. These components are designed to work together, tested together, and documented together. Microsoft owns the whole thing.

The practical consequence of this integration is that most architectural decisions are made for you. When you need an HTTP client, you reach for HttpClient. When you need auth, you configure ASP.NET Identity or integrate Azure AD. When you need a background job, you implement IHostedService. The framework answers these questions before you ask them.

This is genuinely good. Enterprise software benefits from opinionated, coherent stacks. The consistency lowers onboarding costs, simplifies debugging, and produces predictable results.

The JS/TS world does not work this way.

The JS/TS Way

In the JavaScript and TypeScript ecosystem, every layer of the stack is a separate decision made by the team. The language (TypeScript) is maintained by Microsoft but separately from any runtime. The runtime (Node.js) is maintained by the OpenJS Foundation. The web frameworks (Next.js, NestJS, Nuxt) are maintained by separate companies or open-source collectives. The ORM is your choice from a field of viable options. The hosting platform, the CI/CD pipeline, the observability tools — all independent products with independent release cycles, independent documentation, and independent communities.

The first time a .NET engineer encounters the JS ecosystem, the typical reaction is something between confusion and alarm. npm install on a new project downloads hundreds of packages. The same problem (form validation, state management, HTTP fetching) has five popular solutions. Answers on Stack Overflow reference library versions from three major releases ago. Articles that appear authoritative recommend approaches the community quietly deprecated two years later.

This is not incompetence. It is the natural consequence of an ecosystem built in public, by thousands of contributors, without a single coordinating entity. The flexibility is real and valuable — but it comes with a cognitive tax.

This curriculum solves that by making the decisions for you. We have chosen a specific, coherent stack. You will learn that stack. We will explain why we chose each tool, and we will map every concept to something you already know from .NET.

The Stack We’ve Chosen

Here is every tool in our stack, grouped by function:

Function | Our Tool | .NET Equivalent
Language | TypeScript | C#
Runtime | Node.js | .NET CLR
Frontend framework (React) | React + Next.js | Razor Pages / Blazor Server
Frontend framework (Vue) | Vue 3 + Nuxt | Blazor / MVC Views
Backend API framework | NestJS | ASP.NET Core
ORM (primary) | Prisma | Entity Framework Core
ORM (lightweight/SQL-first) | Drizzle | Dapper
Database | PostgreSQL | SQL Server
Package manager | pnpm | NuGet
Version control + CI/CD | GitHub (CLI-first) | Azure DevOps
Hosting | Render | Azure App Service
Auth | Clerk | ASP.NET Identity / Azure AD B2C
Error tracking | Sentry | Application Insights
Code quality | SonarCloud | SonarQube / Roslyn Analyzers
Dependency scanning | Snyk | OWASP Dependency Check / Dependabot
Static analysis | Semgrep | Roslyn Analyzers / SecurityCodeScan
AI coding assistant | Claude Code | GitHub Copilot
Test runner | Vitest | xUnit / NUnit
End-to-end testing | Playwright | Selenium / Playwright (.NET)

You will notice that some .NET equivalents are the same tools used differently (Playwright, SonarCloud, Semgrep all support both ecosystems). Others have clear analogs. A few have no good equivalent — we will call those out explicitly in the relevant articles.

Full Ecosystem Map

The following table maps every major layer of the .NET stack to its JS/TS equivalent, including both the specific tools we use and the broader field of alternatives you will encounter in existing codebases.

Runtime and Language

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
C# | Statically typed application language | TypeScript | JavaScript (plain JS, no types)
.NET CLR | Executes IL bytecode, manages memory | Node.js (V8) | Deno, Bun
.NET SDK | Compiler, runtime, CLI tools | Node.js + tsc | —
dotnet CLI | Build, run, test, publish | pnpm + npm scripts | npm, yarn, bun
.NET versioning (6, 7, 8, 9…) | Runtime version | Node.js LTS versions (20, 22…) | —
global.json | Pin SDK version | .nvmrc / .node-version | —

Project Structure and Build

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
Solution (.sln) | Groups related projects | Monorepo root with pnpm-workspace.yaml | nx.json, turbo.json
Project (.csproj) | Compilation unit, output assembly | package.json | —
AssemblyInfo.cs | Package metadata | package.json name/version fields | —
MSBuild | Build orchestration | Turborepo + npm scripts | Nx, Bazel, Makefile
dotnet build | Compile | pnpm build (invokes tsc or framework CLI) | —
dotnet run | Run the app locally | pnpm dev | —
dotnet watch | Hot reload during development | Built into Next.js / NestJS dev server | nodemon
dotnet publish | Produce deployment artifact | pnpm build (framework-specific) | —
Roslyn | C# compiler | TypeScript compiler (tsc) | esbuild (transpile only)

Packages and Dependencies

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
NuGet | Package registry | npm registry | GitHub Package Registry
NuGet client | Dependency resolution and install | pnpm | npm, yarn
.csproj <PackageReference> | Declare dependencies | package.json dependencies | —
packages.lock.json | Lock file for reproducible builds | pnpm-lock.yaml | package-lock.json, yarn.lock
NuGet cache (~/.nuget/packages) | Local package cache | pnpm store (~/.pnpm-store) | node_modules (npm)
Project references | One project depends on another | pnpm workspace dependencies | —
dotnet tool install -g | Install global CLI tool | pnpm add -g | npm install -g

Web Framework

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
ASP.NET Core | Full-featured web framework | NestJS (API) / Next.js or Nuxt (full-stack) | Express, Fastify, Hono, Remix
Kestrel | HTTP server | Node.js http module (via Express under NestJS) | Fastify
Program.cs / Startup.cs | App configuration and startup | main.ts (NestJS) / next.config.ts (Next.js) | —
Middleware pipeline | Ordered request/response processing | NestJS middleware + Express middleware | —
IServiceCollection | Service registration for DI | NestJS @Module() providers array | —
IServiceProvider | Resolves services at runtime | NestJS dependency injection container | —
Controller | Handles HTTP requests | NestJS @Controller() class | —
Action method | Handles a specific route+verb | NestJS method with @Get(), @Post(), etc. | —
[Authorize] attribute | Authorization filter | NestJS Guard | —
Action filter | Before/after action logic | NestJS Interceptor | —
Model binding | Maps request data to parameters | NestJS Pipes + @Body(), @Query(), @Param() | —
Data Annotations / FluentValidation | Request validation | class-validator or Zod | Joi, Yup
[Route] attribute | URL routing | @Controller('path') + @Get(':id') | —
Exception middleware | Global error handling | NestJS Exception Filters | —
appsettings.json | App configuration | .env files + @nestjs/config | —
IConfiguration | Configuration access | process.env + ConfigService | —
Swagger / Swashbuckle | API documentation | @nestjs/swagger | OpenAPI manually
SignalR | Real-time communication | Socket.io via NestJS WebSocket gateway | native WS

Frontend Frameworks

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
Razor Pages | Server-rendered HTML pages | Next.js (React) or Nuxt (Vue) | Remix, SvelteKit
Blazor Server | Component-based UI (C# runs on server) | React Server Components / Nuxt server components | —
Blazor WebAssembly | Component-based UI (runs in browser) | React SPA / Vue SPA | Angular, Svelte, SolidJS
Razor syntax (@Model.Property) | Template language | JSX (React) or .vue SFC templates (Vue) | —
ViewComponent | Reusable UI component | React component / Vue component | —
INotifyPropertyChanged | Reactive data binding | React useState / Vue ref() | —
Two-way binding (@bind) | Sync UI and model | Vue v-model / React controlled inputs | —
@Html.ValidationMessage | Form validation display | React Hook Form / VeeValidate + Zod | —

Data Layer

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
Entity Framework Core | ORM | Prisma | Drizzle, TypeORM, Sequelize, Knex
DbContext | Database session and change tracking | Prisma Client | —
DbSet<T> | Typed table access | prisma.modelName (e.g., prisma.user) | —
EF model class | Maps to database table | Prisma schema model | —
LINQ | Query language | Prisma Client API / Drizzle query builder | —
Add-Migration | Create a migration file | prisma migrate dev | drizzle-kit generate
Update-Database | Apply migrations | prisma migrate deploy | drizzle-kit push
Scaffold-DbContext | Generate models from existing DB | prisma db pull (introspection) | —
AsNoTracking() | Read-only query (no change tracking) | Default in Prisma (no change tracking) | —
IMemoryCache | In-process caching | node-cache / LRU cache | —
IDistributedCache | Distributed caching | Redis via ioredis | —
SQL Server | Relational database | PostgreSQL | MySQL, SQLite (dev only)
Azure Blob Storage | Object/file storage | Cloudflare R2 / AWS S3 | —
ADO.NET | Raw database access | pg (node-postgres) | —

Testing

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
xUnit / NUnit | Test framework | Vitest | Jest
[Fact] / [Test] | Test declaration | it() or test() | —
[Theory] with [InlineData] | Data-driven tests | it.each() | —
[SetUp] / constructor | Test setup | beforeEach() | —
[TearDown] / Dispose | Test cleanup | afterEach() | —
Moq | Mocking library | vi.mock() (built-in Vitest) | Jest mocks
Selenium / Playwright (.NET) | End-to-end browser tests | Playwright (TypeScript) | Cypress
coverlet / dotCover | Code coverage | Vitest’s built-in coverage (c8/v8) | Istanbul
SQL Server LocalDB | Test database | Docker + real PostgreSQL | SQLite

DevOps and Tooling

.NET Concept | What It Does | Our JS/TS Choice | Alternatives You’ll Encounter
Azure DevOps | CI/CD + version control | GitHub + GitHub Actions | GitLab CI, CircleCI
Azure Pipelines YAML | CI/CD pipeline definition | GitHub Actions workflow YAML | —
Azure App Service | Web app hosting | Render Web Service | Vercel, Railway, Fly.io
Azure Static Web Apps | Static site hosting | Render Static Site | Vercel, Netlify
Azure PostgreSQL | Managed database | Render PostgreSQL | Neon, Supabase, PlanetScale
Azure Redis Cache | Managed Redis | Render Redis | Upstash
Azure Container Registry | Container image storage | Docker Hub / GitHub Container Registry | —
Application Insights | Observability + error tracking | Sentry | Datadog, New Relic
Azure Key Vault | Secrets management | Render environment variables | Doppler, HashiCorp Vault
.editorconfig | Code style configuration | .editorconfig + Prettier config | —
Roslyn Analyzers / StyleCop | Linting and style enforcement | ESLint | —
Visual Studio | IDE | VS Code | WebStorm

Key Differences

The table above shows structural equivalence. These are the philosophical differences that will change how you work:

Fragmentation is the default. In .NET, you get one HTTP client, one DI container, one ORM. In the JS ecosystem, there are five competing solutions for every problem. We have made choices; you do not need to re-evaluate them. But when you read external code, you will encounter the alternatives.

There is no assembly. In .NET, a project compiles to an assembly (.dll or .exe) — a discrete, versioned artifact. In the JS/TS world, TypeScript compiles to JavaScript files that are then bundled. The output is not a versioned artifact; it is a directory of files optimized for execution or delivery. There is no GAC. There is no strong naming.

Types are erased at runtime. TypeScript’s type system is a compile-time tool. At runtime, you have JavaScript. If you receive JSON from an API and type it as User, TypeScript believes you — but the runtime has no way to verify the claim. This is fundamentally different from C#, where a User object IS a User object, enforced by the CLR. This distinction drives almost all of the patterns in Article 2.3 (Zod and end-to-end type safety).
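A minimal sketch of the difference, assuming a hypothetical /api/users/:id endpoint and using Zod, our runtime-validation choice (covered in Article 2.3):

import { z } from 'zod';

interface User {
    id: number;
    email: string;
}

// Compile-time claim only: TypeScript accepts the cast, the runtime checks nothing.
async function loadUserUnchecked(id: number): Promise<User> {
    const raw = await fetch(`/api/users/${id}`).then(r => r.json());
    return raw as User; // if the API returns something else, you find out much later
}

// Runtime check: Zod verifies the shape before the value enters typed code.
const UserSchema = z.object({
    id: z.number(),
    email: z.string().email(),
});

async function loadUserChecked(id: number): Promise<User> {
    const raw: unknown = await fetch(`/api/users/${id}`).then(r => r.json());
    return UserSchema.parse(raw); // throws immediately if the JSON does not match
}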

Single-threaded execution. Node.js runs your code on a single thread. Concurrency is achieved through the event loop, not thread pools. This has implications for how you write async code, how you handle CPU-intensive work, and how you think about scaling. Article 1.2 covers this in depth.

Ecosystem velocity is higher. The Node.js ecosystem moves faster than the .NET ecosystem. Major versions, breaking changes, and paradigm shifts happen more frequently. This is a trade-off, not a flaw — but it requires active attention to stay current. This curriculum uses the 2026 versions of all tools.

No integrated IDE. Visual Studio is a full IDE — debugger, designer, profiler, test runner, NuGet browser, all integrated. VS Code is a text editor with extensions. The debugging story is more manual, the tooling is more fragmented, and the workflow is more terminal-centric. Article 8.2 covers the CLI-first workflow.

Gotchas for .NET Engineers

node_modules is not the NuGet cache. In NuGet, packages are stored once in ~/.nuget/packages and referenced by all projects. In npm, every project gets its own node_modules folder with its own copies of every dependency. This is why a new Node.js project can download 500MB of packages and why node_modules is proverbially heavy. pnpm addresses this with hard links to a central store, which is one reason we use it. You will never commit node_modules. You will add it to .gitignore before your first commit.

Semver ranges are not lockfiles. In NuGet, a <PackageReference Version="6.0.1"> installs exactly version 6.0.1. In npm, "express": "^4.18.0" means “version 4.18.0 or any compatible minor/patch release.” Without a lockfile (pnpm-lock.yaml), two engineers running pnpm install on the same package.json can get different versions. Always commit the lockfile. Always use pnpm install --frozen-lockfile in CI.

TypeScript’s strict mode is not optional. TypeScript has a strict compiler flag that enables a set of checks including strict null checking, no implicit any, and others. When strict is off, TypeScript is approximately as type-safe as writing comments — it will accept almost anything. All our projects start with "strict": true in tsconfig.json. If you encounter a TypeScript codebase where strict is off, treat it with the same suspicion you would treat a C# codebase with #pragma warning disable at the top of every file.
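A quick illustration with hypothetical code: both functions below are accepted with "strict": false and rejected with "strict": true.

// With "strict": false this compiles; with "strict": true it produces two errors.

function firstName(fullName) {              // strict: parameter implicitly has an 'any' type
    return fullName.split(' ')[0];
}

interface User { id: number; name: string }

function findUser(users: User[], id: number) {
    return users.find(u => u.id === id);    // inferred as User | undefined
}

const found = findUser([], 42).name;        // strictNullChecks: object is possibly 'undefined'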

async/await looks the same but is not. TypeScript async/await and C# async/await use the same keywords and similar syntax. The semantics are different in ways that will produce subtle bugs if you assume they are equivalent. The most important difference: in C#, await can resume on any thread pool thread; in Node.js, everything runs on the same thread. There is no ConfigureAwait(false). There is no Task.Run for offloading to a thread pool. Article 1.7 covers this in detail.

Hands-On Exercise

This is an orientation article — there is no code to write yet. The exercise is to set up your mental model.

Take any feature in a .NET project you know well — a typical CRUD API endpoint with validation, auth, and database access. Write down the ASP.NET components involved: the controller, the service, the repository or EF context, the DTO, the validation attributes, the auth filter.

Then, using the ecosystem map above, write the equivalent list for our JS/TS stack. Which NestJS component replaces the controller? Which replaces the EF context? Where does Zod fit in?
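If you want to sanity-check your mapping, here is a hedged sketch of where those pieces land in NestJS. The names (UsersService, UserDto, the guard) are placeholders invented for illustration, not code from a real project; Track 4 builds the real versions.

import {
    CanActivate,
    Controller,
    ExecutionContext,
    Get,
    Injectable,
    NotFoundException,
    Param,
    ParseIntPipe,
    UseGuards,
} from '@nestjs/common';

// Placeholder guard: roughly the role of [Authorize]
@Injectable()
class AuthGuard implements CanActivate {
    canActivate(_context: ExecutionContext): boolean {
        return true; // real logic would inspect the request / verify a JWT
    }
}

interface UserDto { id: number; email: string }

// Placeholder service: roughly a service registered in IServiceCollection
@Injectable()
class UsersService {
    async findById(id: number): Promise<UserDto | null> {
        return { id, email: 'user@example.com' }; // a real version would call Prisma here
    }
}

@Controller('users')            // ≈ [Route("users")] on an ASP.NET controller
@UseGuards(AuthGuard)           // ≈ [Authorize]
export class UsersController {
    constructor(private readonly users: UsersService) {} // ≈ constructor injection

    @Get(':id')                 // ≈ [HttpGet("{id}")]
    async getUser(@Param('id', ParseIntPipe) id: number): Promise<UserDto> { // ParseIntPipe ≈ model binding
        const user = await this.users.findById(id);
        if (!user) throw new NotFoundException(); // ≈ return NotFound()
        return user;            // serialized to JSON, like returning a DTO from an action
    }
}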

Keep this list. By the end of Track 4, every item on it will have a concrete, working implementation you have written yourself.

If you want to go further: spend 15 minutes exploring the NestJS documentation and the Next.js documentation. Do not try to learn anything yet — just notice how the documentation is structured, what concepts appear in the navigation, and how they compare to ASP.NET Core’s documentation structure.

Quick Reference

The Toolchain at a Glance

Layer | .NET Tool | Our JS/TS Tool
Language | C# | TypeScript
Runtime | .NET CLR | Node.js
Full-stack framework (React) | ASP.NET MVC + Razor Pages | Next.js
Full-stack framework (Vue) | ASP.NET MVC + Razor Pages | Nuxt
API framework | ASP.NET Core | NestJS
ORM (primary) | EF Core | Prisma
ORM (SQL-first) | Dapper | Drizzle
Database | SQL Server | PostgreSQL
Package manager | NuGet + dotnet CLI | pnpm
CI/CD | Azure DevOps | GitHub Actions
Hosting | Azure App Service | Render
Auth | ASP.NET Identity / Azure AD | Clerk
Error tracking | Application Insights | Sentry
Code quality | SonarQube + Roslyn | SonarCloud + ESLint
Dependency scanning | Dependabot / OWASP | Snyk
Static analysis | Roslyn Analyzers | Semgrep
AI assistant | GitHub Copilot | Claude Code
Unit testing | xUnit / NUnit | Vitest
E2E testing | Playwright (.NET) | Playwright (TypeScript)

Key Conceptual Shifts

If you think… | Think instead…
“One framework covers everything” | Each layer is a separate tool with its own release cycle
“Types are enforced at runtime” | TypeScript types are erased; runtime validation requires Zod
“I can use thread pools for concurrency” | Node.js is single-threaded; use async I/O, not threads
“My IDE knows everything about the build” | The build pipeline is CLI-first: pnpm, tsc, prisma
“Dependencies are downloaded once globally” | node_modules is per-project (pnpm mitigates this)
“Strict mode is optional” | "strict": true is non-negotiable

Where to Go Next in This Curriculum

  • Article 1.2 — Node.js runtime: the single-threaded event loop explained for CLR engineers
  • Article 1.3 — pnpm and package.json: NuGet to npm migration guide
  • Article 1.7 — async/await: same syntax, different execution model
  • Article 2.1 — TypeScript’s type system compared to C#’s
  • Article 4.1 — NestJS architecture: the ASP.NET Core Rosetta Stone

Runtime Fundamentals: CLR vs. Node.js

For .NET engineers who know: The CLR, thread pools, JIT compilation, and async/await in C#
You’ll learn: How Node.js achieves high concurrency with a single thread, and why TypeScript’s async/await is syntactically familiar but mechanically different from C#’s
Time: 15-20 minutes

The most dangerous misconception a .NET engineer brings to Node.js is this: “async/await looks the same, so it works the same.” It does not. Understanding why is not academic — it directly affects how you write correct, performant TypeScript code and explains behaviors that will otherwise baffle you in production.


The .NET Way (What You Already Know)

The CLR is a multi-threaded runtime. When a request arrives at an ASP.NET Core application, Kestrel picks it up and assigns it to a thread from the thread pool. That thread executes your middleware pipeline, controller action, and any synchronous work. When it hits an await, the CLR’s SynchronizationContext or TaskScheduler releases the thread back to the pool so it can serve another request while the awaited I/O operation completes. When the I/O finishes, a thread (possibly a different one) is pulled from the pool to resume execution.

// ASP.NET Core — this runs on a thread pool thread.
// When it hits await, the thread is returned to the pool.
// A (possibly different) thread resumes when the DB call completes.
[HttpGet("{id}")]
public async Task<UserDto> GetUser(int id)
{
    // Thread is released here while DB is doing I/O
    var user = await _dbContext.Users.FindAsync(id);

    // A thread pool thread resumes here — might not be the same thread
    return _mapper.Map<UserDto>(user);
}

The CLR’s model is fundamentally preemptive multi-threading: the OS scheduler can interrupt any thread at any time and switch to another. Your code runs in parallel across multiple cores. ConfigureAwait(false) exists because the default behavior captures the current SynchronizationContext and resumes on it, which matters in ASP.NET Framework and UI apps. ASP.NET Core has no SynchronizationContext, so the flag is mostly a library-code habit there: it avoids capturing a context a caller might have, but there is no request context to switch back to.

The thread pool dynamically sizes itself based on demand. The runtime can also handle CPU-bound work by running tasks on pool threads in parallel. This is the model you’ve internalized over years of .NET development.

Node.js works nothing like this.


The Node.js Way

Single Thread, Event Loop

Node.js runs your JavaScript/TypeScript code on a single thread. There is no thread pool for your application code. There is no concept of “releasing a thread to the pool.” At any given moment, exactly one piece of your code is executing.

This is not a limitation to work around — it is the architecture. And it handles high concurrency through a different mechanism: the event loop.

The event loop is an infinite loop that continuously checks for work to do. Node.js delegates I/O operations (network calls, file reads, database queries) to the operating system or to libuv (its cross-platform async I/O library). The OS handles the actual waiting. When the I/O completes, the OS notifies libuv, libuv queues a callback on the event loop, and the event loop executes that callback when the current synchronous work is done.

graph TD
    EL["Event Loop"]
    T["timers\nsetTimeout / setInterval"]
    P["pending callbacks"]
    PO["poll (I/O events)"]
    CH["check (setImmediate)"]
    JS["Your JS code\n(single thread)"]
    LIB["libuv thread pool\n(file I/O, DNS, crypto)"]
    OS["OS async I/O\n(network sockets, epoll/kqueue)"]

    T --> P --> PO --> CH --> T
    EL --- T
    EL --- JS
    EL --- LIB
    LIB --- OS

The event loop has distinct phases, each with its own queue of callbacks to execute:

  1. Timers — callbacks from setTimeout and setInterval whose delay has elapsed
  2. Pending callbacks — I/O callbacks deferred to the next loop iteration
  3. Idle, prepare — internal use only
  4. Poll — retrieve new I/O events; execute I/O callbacks (the main phase)
  5. Check — setImmediate callbacks execute here
  6. Close callbacks — cleanup callbacks (e.g., socket.on('close', ...))

Between each phase, Node.js drains two additional queues before moving on:

  • process.nextTick queue — runs after the current operation completes, before returning to the event loop. Higher priority than Promises.
  • Promise microtask queue — resolved Promise callbacks (.then(), await continuations)

Both of these run to completion between event loop phases. This matters when reasoning about execution order.
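A tiny script makes the ordering concrete (run it with ts-node; the file name is arbitrary):

// queues-demo.ts — observe the queues described above
setTimeout(() => console.log('4: setTimeout (timers phase)'), 0);

Promise.resolve().then(() => console.log('3: Promise microtask'));

process.nextTick(() => console.log('2: process.nextTick'));

console.log('1: synchronous code');

// Prints 1, 2, 3, 4: synchronous code first, then the nextTick queue,
// then the Promise microtask queue, and only then does the event loop
// reach the timers phase and run the setTimeout callback.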

What await Actually Does in Node.js

In TypeScript/Node.js, await does not release a thread. There are no threads to release. What it does is yield control back to the event loop until the awaited Promise resolves.

// Node.js/TypeScript — there is only one thread.
// When we hit await, we yield to the event loop.
// The event loop can process other pending callbacks while the DB query runs.
// When the query completes (via libuv callback), our code resumes.

async function getUser(id: number): Promise<UserDto> {
    // Control yields to the event loop here.
    // Other requests can be handled while the DB query is in-flight.
    const user = await db.query('SELECT * FROM users WHERE id = $1', [id]);

    // We resume here on the same thread — always.
    return mapToDto(user.rows[0]);
}

The single thread is never blocked (assuming the code is written correctly). The event loop keeps spinning, picking up I/O completion callbacks and executing them in turn.

Side-by-Side: Handling 1,000 Concurrent Requests

To make this concrete, here is how ASP.NET Core and a Node.js HTTP server handle the same scenario: 1,000 simultaneous requests each requiring a 100ms database query.

ASP.NET Core (CLR)

sequenceDiagram
    participant R as Request
    participant T1 as Thread-1
    participant Pool as Thread Pool
    participant DB as Database
    participant T42 as Thread-42

    R->>T1: thread pool assigns Thread-1
    T1->>T1: executes middleware
    T1->>DB: await db.QueryAsync()
    T1->>Pool: released to pool
    Note over DB: 100ms passes
    DB->>Pool: DB returns
    Pool->>T42: resumes request
    T42->>R: sends response

The thread pool might have 50-200 threads active. Each await releases a thread but involves OS-level thread context switches. Memory overhead per thread: ~1MB stack.

Node.js

sequenceDiagram
    participant R as Request
    participant EL as Event Loop (Thread-1)
    participant LIB as libuv / OS
    participant DB as Database

    R->>EL: event loop processes it on Thread-1 (only thread)
    EL->>EL: code executes synchronously until first await
    EL->>LIB: await db.query() — libuv sends query to OS
    EL->>EL: picks up next pending request callback
    Note over LIB,DB: 100ms passes
    DB->>LIB: OS signals I/O completion
    LIB->>EL: event loop queues our callback
    EL->>R: executes our callback, sends response

No thread pool. No context switches between threads. Concurrent requests are handled by interleaving execution — each request makes progress whenever its I/O completes, with zero overhead from thread scheduling.

The result: for I/O-bound workloads, Node.js can handle tens of thousands of concurrent connections using a fraction of the memory a thread-per-request model would require.

A Full Request Lifecycle Comparison

Here is a complete picture of how a typical “fetch a user and return JSON” request flows through each system.

ASP.NET Core pipeline:

graph TD
    A["Kestrel"] --> B["Thread Pool: assign thread"]
    B --> C["Middleware 1: logging"]
    C --> D["Middleware 2: auth"]
    D --> E["Routing"]
    E --> F["Controller action invoked"]
    F --> G["await db.Users.FindAsync(id)\n← thread released to pool"]
    G --> H["DB returns\n← thread resumed (possibly different thread)"]
    H --> I["Map to DTO"]
    I --> J["JSON serialization"]
    J --> K["Response written"]
    K --> L["Thread returned to pool"]

Node.js + Express/NestJS pipeline:

graph TD
    A["libuv: TCP connection accepted"] --> B["Event loop: execute request handler"]
    B --> C["Middleware 1: logging (sync)"]
    C --> D["Middleware 2: auth (sync, or await JWT verify)"]
    D --> E["Route matched"]
    E --> F["Controller function called"]
    F --> G["await db.query()\n← yield to event loop"]
    G --> H["Other requests handled during DB wait"]
    H --> I["DB completes: callback queued"]
    I --> J["Event loop: resume our handler"]
    J --> K["Map result to response shape"]
    K --> L["res.json(data)\n← libuv writes to TCP socket"]
    L --> M["Event loop moves to next callback"]

The Node.js model has no thread assignment overhead, no stack allocation per request, and no context switch cost between requests. The trade-off is that everything in the critical path must be non-blocking — something that is easy to get wrong.


Key Differences

Concept | CLR / ASP.NET Core | Node.js
Threading model | Multi-threaded, OS-scheduled | Single-threaded, event loop
Concurrency mechanism | Thread pool (blocking I/O yields thread) | Event loop (async I/O, callbacks/Promises)
await releases | Thread back to thread pool | Yields to event loop (no thread to release)
ConfigureAwait(false) | Required to avoid deadlocks in some contexts | Does not exist, not needed
Task.Run(...) | Offloads to thread pool thread | Has no equivalent for app code; use Worker Threads for CPU-bound work
Task.WhenAll(...) | Runs tasks concurrently on pool threads | Promise.all(...) — concurrent I/O on single thread
Startup time | Slow (JIT compilation on first request) | Fast (V8 JIT is incremental, lighter startup)
Memory per concurrent connection | ~1MB per thread stack | Kilobytes (single thread, heap allocation only)
CPU-bound work | Runs on thread pool threads in parallel | Blocks the event loop — must use Worker Threads
Garbage collection | Generational GC, background threads | V8 GC, same thread pauses (though incremental)
Max heap (default) | Limited by system RAM | ~1.5GB by default on 64-bit, configurable with --max-old-space-size

Gotchas for .NET Engineers

1. CPU-Bound Code Blocks Every Concurrent Request

In .NET, running a CPU-intensive calculation blocks one thread. The thread pool picks up the slack with other threads. In Node.js, a CPU-bound operation on the main thread blocks the entire event loop — no other request gets served until it finishes.

// This blocks every other request while it runs.
// In .NET, this would only block the one thread handling this request.
app.get('/fibonacci', (req, res) => {
    const result = fibonacci(45); // 3-4 seconds of CPU time
    res.json({ result }); // Every other request waits here
});

function fibonacci(n: number): number {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

The fix for genuine CPU-bound work is Node.js Worker Threads, which run JavaScript in a separate thread with its own V8 instance and event loop:

import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';

// main-thread.ts
function runFibonacciInWorker(n: number): Promise<number> {
    return new Promise((resolve, reject) => {
        const worker = new Worker('./fibonacci-worker.js', { workerData: { n } });
        worker.on('message', resolve);
        worker.on('error', reject);
    });
}

// fibonacci-worker.ts — runs in its own thread, won't block the event loop
// (assumes the same recursive fibonacci() implementation as in the previous snippet)
if (!isMainThread) {
    const { n } = workerData as { n: number };
    const result = fibonacci(n);
    parentPort?.postMessage(result);
}

Worker threads communicate via message passing (like Web Workers in the browser). They do not share memory by default. This is intentionally different from .NET’s shared-memory threading model — it eliminates most race conditions at the cost of serialization overhead. For most web API work, you will not need worker threads. If you are doing report generation, PDF rendering, image processing, or any computation that takes more than 10-20ms, you do.

2. async Functions That Contain Synchronous Blocking Code Are Still Blocking

Marking a function async in TypeScript does not make it non-blocking. It only means it returns a Promise and can use await. If the function body contains no await expressions, it runs synchronously and blocks the event loop for its entire duration.

// This is fully synchronous despite being async.
// Awaiting it still blocks the event loop for as long as the map takes to run.
async function processLargeArray(items: string[]): Promise<string[]> {
    // No await — this runs synchronously, blocking other requests
    return items.map(item => expensiveTransform(item));
}

// If expensiveTransform is slow, you need to either:
// 1. Move this to a Worker Thread
// 2. Break it into batches with setImmediate() between batches to yield the event loop
async function processLargeArrayYielding(items: string[]): Promise<string[]> {
    const results: string[] = [];
    for (const item of items) {
        results.push(expensiveTransform(item));
        // Yield to the event loop every 100 items
        if (results.length % 100 === 0) {
            await new Promise(resolve => setImmediate(resolve));
        }
    }
    return results;
}

3. There Is No ConfigureAwait(false), and You Do Not Need It

In C#, ConfigureAwait(false) is best practice in library code to avoid deadlocks caused by capturing the SynchronizationContext. .NET engineers sometimes reach for it out of habit when writing TypeScript.

In Node.js, there is no SynchronizationContext. There is only one thread. await always resumes on the event loop. There is no deadlock risk from sync-over-async because there is nothing to deadlock against. Do not look for an equivalent. Do not worry about it.

What you should worry about in its place: not forgetting await. In C#, forgetting await is usually flagged by a compiler warning or becomes visible immediately because Task<T> is not assignable to T. In TypeScript, calling an async function as a statement and discarding the returned Promise compiles without complaint, and the Promise resolves or rejects silently in the background. Enable @typescript-eslint/no-floating-promises in your ESLint configuration. It will catch this class of bug at lint time.

// TypeScript won't always catch this without the lint rule
async function updateUser(id: number, data: UpdateUserDto): Promise<void> {
    // Missing await — the update runs, but we don't wait for it.
    // The function returns before the DB write completes.
    // The caller has no idea anything went wrong.
    db.update(users).set(data).where(eq(users.id, id)); // no await
}
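The fix is simply to await the call (same hypothetical Drizzle-style db helper as above); with the lint rule enabled, the unawaited version is flagged before it ever runs:

async function updateUser(id: number, data: UpdateUserDto): Promise<void> {
    // Awaiting means the caller actually waits for the write, and a failed
    // write surfaces as a rejected Promise instead of a silent background error.
    await db.update(users).set(data).where(eq(users.id, id));
}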

4. Promise.all Concurrency Is Different from Task.WhenAll Parallelism

Task.WhenAll in C# runs tasks truly in parallel on multiple threads. Promise.all in Node.js runs Promises concurrently on a single thread — the I/O operations overlap, but the JavaScript code between await points still executes sequentially.

For I/O-bound work (API calls, DB queries), the practical result is similar — both approaches finish when the slowest operation finishes. For CPU-bound work, Promise.all provides no benefit because only one piece of code runs at a time.

// These two approaches have identical performance for I/O-bound operations.
// Neither is "parallel" in the CPU sense — both are concurrent I/O.

// Sequential — total time: 300ms + 200ms + 100ms = 600ms
const user = await fetchUser(id);
const orders = await fetchOrders(id);
const preferences = await fetchPreferences(id);

// Concurrent — total time: max(300ms, 200ms, 100ms) = 300ms
const [user, orders, preferences] = await Promise.all([
    fetchUser(id),
    fetchOrders(id),
    fetchPreferences(id),
]);

Use Promise.all (or Promise.allSettled) any time you have independent async operations that do not depend on each other’s results. This is the equivalent of Task.WhenAll and has the same performance benefit for I/O-bound work.
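When some of those independent operations are allowed to fail without failing the whole request, Promise.allSettled reports each outcome instead of rejecting on the first error (same hypothetical fetch helpers as above):

const results = await Promise.allSettled([
    fetchUser(id),
    fetchOrders(id),
    fetchPreferences(id),
]);

for (const result of results) {
    if (result.status === 'fulfilled') {
        console.log('loaded:', result.value);
    } else {
        console.warn('failed:', result.reason); // the rejection reason for that one operation
    }
}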

5. The Memory Model Is Not What You Expect

Node.js has a default heap limit of roughly 1.5GB on 64-bit systems. You can raise it with --max-old-space-size=4096 (to 4GB), but you cannot exceed physical RAM. Unlike the CLR, which has decades of optimization for large heaps and server workloads, V8’s GC is optimized for shorter-lived objects and smaller heaps.

For most web APIs, this is not a problem. Where it becomes one: in-memory caching of large datasets, streaming large file uploads into memory, and processing large JSON payloads without streaming. Know the limit exists, monitor your heap usage in production via Sentry or process.memoryUsage(), and prefer streaming approaches for large data.
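A minimal way to watch the heap from inside the process (values are in bytes; the 30-second interval is an arbitrary choice):

const MB = 1024 * 1024;

// Log V8 heap usage and resident set size periodically.
setInterval(() => {
    const { heapUsed, heapTotal, rss } = process.memoryUsage();
    console.log(
        `heap ${(heapUsed / MB).toFixed(1)} / ${(heapTotal / MB).toFixed(1)} MB, rss ${(rss / MB).toFixed(1)} MB`,
    );
}, 30_000).unref(); // unref() so this timer alone does not keep the process alive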


Hands-On Exercise

This exercise demonstrates the event loop’s behavior concretely. Run it and verify you can predict the output before you do.

Create a file event-loop-demo.ts:

import { createServer } from 'http';
import { setTimeout as sleep } from 'timers/promises';

// Simulate two types of work:
// 1. I/O-bound: a DB query that takes 100ms
// 2. CPU-bound: a loop that takes 100ms

function cpuBound100ms(): void {
    const start = Date.now();
    while (Date.now() - start < 100) {
        // Busy-wait — burns CPU, blocks event loop
    }
}

async function ioBound100ms(): Promise<void> {
    // Yields to event loop for 100ms, does not block it
    await sleep(100);
}

let requestCount = 0;

const server = createServer(async (req, res) => {
    const id = ++requestCount;
    const start = Date.now();
    console.log(`[${id}] Request started at ${start}`);

    if (req.url === '/cpu') {
        cpuBound100ms(); // Blocks the event loop
    } else {
        await ioBound100ms(); // Yields to the event loop
    }

    const duration = Date.now() - start;
    console.log(`[${id}] Request done in ${duration}ms`);
    res.end(JSON.stringify({ id, duration }));
});

server.listen(3000, () => {
    console.log('Server running on port 3000');
});

Run it with:

npx ts-node event-loop-demo.ts

Then in a second terminal, send two concurrent requests to /io:

curl http://localhost:3000/io & curl http://localhost:3000/io &
wait

Both should complete in ~100ms total because they overlap. Now try with /cpu:

curl http://localhost:3000/cpu & curl http://localhost:3000/cpu &
wait

The second request will take ~200ms — it had to wait for the first CPU-bound request to finish before the event loop could pick it up. This is the event loop blocking in practice.

Next, extend the exercise:

  1. Move the CPU-bound work to a Worker Thread and verify that both concurrent requests now complete in ~100ms.
  2. Add console.log calls around process.nextTick and Promise.resolve().then() to observe microtask queue ordering.

Reference solution structure (fill in the Worker Thread implementation):

import { Worker } from 'worker_threads';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';

function runCpuBoundInWorker(): Promise<void> {
    return new Promise((resolve, reject) => {
        const worker = new Worker(
            join(dirname(fileURLToPath(import.meta.url)), 'cpu-worker.ts'),
            // ts-node/esm handles the TypeScript — in production, compile first
        );
        worker.once('message', resolve);
        worker.once('error', reject);
    });
}
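One possible shape for the worker file, as a sketch that reuses the busy-wait helper from the demo (the message payload is arbitrary):

// cpu-worker.ts — the busy-wait now burns CPU on the worker's own thread,
// so the main event loop keeps serving other requests.
import { parentPort } from 'worker_threads';

function cpuBound100ms(): void {
    const start = Date.now();
    while (Date.now() - start < 100) {
        // Busy-wait
    }
}

cpuBound100ms();
parentPort?.postMessage('done');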

Quick Reference

.NET / CLR Concept | Node.js Equivalent | Notes
Thread pool | Event loop | Conceptually different — Node has one thread, not a pool
await releases thread | await yields to event loop | No thread to release; same thread always resumes
Task.Run(() => ...) | new Worker(...) | Worker Threads for CPU-bound work only
Task.WhenAll(...) | Promise.all(...) | Concurrent I/O, not parallel CPU execution
Task.WhenAny(...) | Promise.race(...) | Settles as soon as the first Promise settles
Task.Delay(ms) | await setTimeout(ms) from timers/promises | Or new Promise(r => setTimeout(r, ms))
ConfigureAwait(false) | Nothing | Does not exist, not needed
CancellationToken | AbortController / AbortSignal | Web-standard API, works with fetch and newer Node.js APIs
IHostedService | Node.js process itself / setInterval / Worker | Background tasks run in the same process; see Article 4.6 for job queues
GC.Collect() | --expose-gc + global.gc() | Never use in production; only for benchmarking
DOTNET_GC_HEAP_HARD_LIMIT | --max-old-space-size=<MB> | Node.js CLI flag to increase heap limit
Environment.ProcessorCount | os.cpus().length | Number of logical CPUs available
Thread.CurrentThread.ManagedThreadId | threadId from worker_threads | 0 on the main thread; each worker gets its own ID
JIT compilation warmup | V8 incremental compilation | Node.js starts faster; hot paths JIT over time

Package Management: NuGet vs. npm/pnpm

For .NET engineers who know: NuGet, MSBuild, dotnet add package, packages.lock.json, and the NuGet cache at %USERPROFILE%\.nuget\packages
You’ll learn: How npm and pnpm package management maps to NuGet, where it diverges, and why those differences cause real problems if you don’t understand them
Time: 15-20 minutes


The .NET Way (What You Already Know)

NuGet is the package manager for the .NET ecosystem. You reference packages in your .csproj file, restore them with dotnet restore, and the runtime uses the package graph to resolve dependencies. The key properties of this system:

  • Centralized registry: NuGet.org is the default and dominant package source. Private feeds (Azure Artifacts, GitHub Packages) are opt-in.
  • Version pinning by default: When you dotnet add package Newtonsoft.Json, you get an exact version in your .csproj. No ranges unless you write them manually.
  • Global cache: All packages are stored once in ~/.nuget/packages and shared across every project on your machine. Two projects using Newtonsoft.Json 13.0.3 share the same on-disk files.
  • MSBuild integration: Package restore is part of the build pipeline. dotnet build runs restore implicitly.
  • Lockfile is optional: packages.lock.json exists but most projects don’t use it. Reproducibility is ensured by version pinning in .csproj.
  • Transitive dependency resolution: NuGet resolves the full dependency graph and selects the lowest applicable version that satisfies all constraints — a “nearest wins” strategy when conflicts arise.
<!-- .csproj — explicit, versioned, readable -->
<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="8.0.0" />
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
  <PackageReference Include="Serilog.AspNetCore" Version="8.0.0" />
</ItemGroup>

This is the mental model you’re bringing to npm. Some of it maps cleanly. A significant portion does not.


The npm/pnpm Way

package.json vs. .csproj

The package.json file is the npm equivalent of .csproj — it declares your package identity, dependencies, and scripts. The structural difference is that .csproj is XML with a tight MSBuild contract, while package.json is a freeform JSON document with only a few reserved keys.

// package.json — the JS equivalent of .csproj
{
  "name": "my-api",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "express": "^4.18.2",
    "zod": "^3.22.4"
  },
  "devDependencies": {
    "typescript": "^5.3.3",
    "@types/express": "^4.17.21",
    "vitest": "^1.2.0"
  },
  "scripts": {
    "build": "tsc",
    "dev": "ts-node-dev src/main.ts",
    "test": "vitest run",
    "lint": "eslint src"
  }
}

The dependencies / devDependencies split is the first conceptual shift: devDependencies contains packages only needed during development and build (compilers, test runners, linters). They are not installed in production environments when you run npm install --production or pnpm install --prod. In .NET there is no equivalent concept at the package reference level — build tools are either part of the SDK or handled by MSBuild targets.

Semver Ranges: The ^ and ~ Problem

This is the most operationally significant difference from NuGet.

In .csproj, Version="13.0.3" means exactly 13.0.3. In package.json, version strings are ranges by default:

Syntax | Meaning | .NET equivalent
"4.18.2" | Exactly 4.18.2 | Version="4.18.2"
"^4.18.2" | >=4.18.2 and <5.0.0 (compatible minor/patch) | No direct equivalent
"~4.18.2" | >=4.18.2 and <4.19.0 (patch only) | No direct equivalent
">=4.0.0" | 4.0.0 or higher | No direct equivalent
"*" | Any version | Closest: omit the version entirely

When you run npm install or pnpm install, the package manager resolves each range to a specific version based on what’s currently published on the registry. Run it again six months later and you may get different versions — not because you changed anything, but because new patch releases were published within the allowed range.

This is why lockfiles exist and must be committed.

Lockfiles: Your Reproducibility Guarantee

In .NET, version pinning in .csproj provides reproducibility. In npm/pnpm, version ranges in package.json mean the lockfile is the reproducibility mechanism.

Concept | NuGet | npm | pnpm
Manifest | .csproj | package.json | package.json
Lockfile | packages.lock.json (optional) | package-lock.json | pnpm-lock.yaml
Install from lockfile | dotnet restore --locked-mode | npm ci | pnpm install --frozen-lockfile
Cache location | ~/.nuget/packages | ~/.npm (content-addressed) | ~/.local/share/pnpm/store

The lockfile records the exact resolved version of every dependency and transitive dependency. It must be committed to source control. It must be used in CI. Without it, two engineers cloning the same repo may install different package versions.

In CI, always use the locked install command:

# CI pipelines — install exactly what the lockfile specifies
pnpm install --frozen-lockfile    # fails if lockfile is out of date
npm ci                             # npm equivalent

Never use npm install or pnpm install without --frozen-lockfile in CI — these commands update the lockfile if ranges are satisfied by newer versions, defeating the purpose of locking.

node_modules vs. the NuGet Cache

The NuGet cache stores packages once at ~/.nuget/packages and all projects share them. npm installs packages directly into a node_modules folder inside each project. This has significant consequences:

The node_modules size problem: A freshly bootstrapped Next.js project can have 300MB+ in node_modules. If you work on five projects, that’s potentially 1.5GB of packages — most of them duplicates. This is not a hypothetical. It is a daily reality. The node_modules folder is famously joked about as the heaviest object in the universe for a reason.

You never commit node_modules. Every .gitignore for a Node.js project must include node_modules/. This is not optional. Committing node_modules is one of the few genuinely catastrophic mistakes a .NET engineer new to the JS ecosystem can make — it adds hundreds of megabytes to the repository and breaks everything downstream.

Why pnpm solves this: pnpm uses a content-addressed global store (similar to NuGet’s cache) and creates symlinks in node_modules rather than copying files. A package used by five projects is stored once on disk. This is the primary reason we use pnpm instead of npm.

graph TD
    subgraph NuGet["NuGet (all projects share one cache)"]
        NC["~/.nuget/packages/newtonsoft.json/13.0.3/\n← stored once"]
    end

    subgraph npm["npm (each project has its own copy)"]
        NA["project-a/node_modules/lodash/\n← full copy"]
        NB["project-b/node_modules/lodash/\n← another full copy"]
        NC2["project-c/node_modules/lodash/\n← another full copy"]
    end

    subgraph pnpm["pnpm (symlinks to one global store)"]
        PS["~/.local/share/pnpm/store/v3/lodash/4.17.21/\n← stored once"]
        PA["project-a/node_modules/lodash"]
        PB["project-b/node_modules/lodash"]
        PC["project-c/node_modules/lodash"]
        PA -->|symlink| PS
        PB -->|symlink| PS
        PC -->|symlink| PS
    end

Installing and Managing Packages

The CLI commands map cleanly once you understand the structure:

# Adding a package
dotnet add package Newtonsoft.Json --version 13.0.3
pnpm add zod                          # adds to dependencies, installs latest
pnpm add -D typescript                # adds to devDependencies (-D)
pnpm add zod@3.22.4                   # exact version

# Removing a package
dotnet remove package Newtonsoft.Json
pnpm remove zod

# Restoring/installing all packages
dotnet restore
pnpm install

# Listing installed packages
dotnet list package
pnpm list

# Updating packages
dotnet add package Newtonsoft.Json   # re-add to get latest compatible
pnpm update zod                       # updates within semver range
pnpm update zod --latest              # updates to latest, ignores range

Scripts in package.json vs. MSBuild Targets

MSBuild provides a task system with BeforeBuild, AfterBuild, custom Target elements, and rich dependency modeling. The scripts section in package.json is a simpler equivalent: named shell commands that can be invoked with pnpm run <name>.

<!-- .csproj MSBuild target -->
<Target Name="GenerateApiClient" BeforeTargets="Build">
  <Exec Command="nswag run nswag.json" />
</Target>
// package.json scripts
{
  "scripts": {
    "build": "tsc --project tsconfig.json",
    "build:full": "pnpm run generate && pnpm run build",
    "generate": "openapi-typescript api.yaml -o src/api-types.ts",
    "dev": "ts-node-dev --respawn src/main.ts",
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage",
    "lint": "eslint src --ext .ts",
    "lint:fix": "eslint src --ext .ts --fix",
    "typecheck": "tsc --noEmit"
  }
}

Scripts run with pnpm run <name>. pnpm also lets you omit run for any script whose name does not collide with one of pnpm’s own commands, which covers common ones like build, test, dev, and start:

pnpm build          # runs the "build" script
pnpm dev            # runs the "dev" script
pnpm run lint:fix   # also works as "pnpm lint:fix"; the explicit "run" avoids clashes with pnpm's built-in commands

There is no direct equivalent to MSBuild’s dependency graph between targets. If you need script A to run before script B, you either chain them explicitly ("build:full": "pnpm run generate && pnpm run build") or use a tool like Turborepo (covered in Article 6.5).

You can also run scripts from packages directly without adding them to scripts:

pnpm exec tsc --version        # run a locally installed binary
pnpm dlx create-next-app@latest  # run without installing (like dotnet tool run)

Global vs. Local Installs

In .NET, global tools are installed with dotnet tool install -g and available everywhere. In npm/pnpm, global installs exist but are discouraged:

# Global install (works, but avoid if possible)
pnpm add -g typescript
tsc --version    # now available globally

# Local install (preferred)
pnpm add -D typescript
pnpm exec tsc --version    # run via pnpm exec
# or add to package.json scripts and run via pnpm run

The reason to prefer local installs: reproducibility. If TypeScript is installed globally at version 5.2 on your machine but a colleague has 5.0, you will get different compilation behavior. Local installs pin the version in package.json and the lockfile, guaranteeing every developer and CI pipeline uses the same version.

The one exception is project scaffolding tools you run once (create-next-app, nest new). For those, use pnpm dlx (equivalent to npx) to run them transiently without installing:

pnpm dlx create-next-app@latest my-app
pnpm dlx @nestjs/cli@latest new my-api

Auditing Dependencies for Vulnerabilities

npm and pnpm have built-in audit commands that check your dependency tree against a vulnerability database:

pnpm audit                    # show all vulnerabilities
pnpm audit --audit-level high # fail only for high/critical
pnpm audit --fix              # attempt to fix by upgrading within ranges

The output maps severity levels (critical, high, moderate, low, info) to CVE identifiers and the affected package. A typical audit finding:

┌─────────────────────────────────────────────┐
│                    moderate                   │
│   Prototype Pollution in lodash               │
│   Package: lodash                             │
│   Patched in: >=4.17.21                       │
│   Dependency of: my-lib > some-package        │
│   Path: my-lib > some-package > lodash        │
│   More info: https://npmjs.com/advisories/... │
└─────────────────────────────────────────────┘

The Path field is important — it shows that the vulnerable lodash is a transitive dependency (your dependency some-package depends on it, not your code directly). Fixing it may require waiting for some-package to publish an updated version, or overriding the transitive dependency version using pnpm’s overrides field:

// package.json — override a transitive dependency version
{
  "pnpm": {
    "overrides": {
      "lodash": ">=4.17.21"
    }
  }
}

This is the equivalent of the <PackageReference> version floor override in NuGet. It forces pnpm to use at least 4.17.21 regardless of what transitive dependencies request.

For team-wide continuous scanning, we integrate Snyk (covered in Article 7.3) into the CI pipeline. The built-in pnpm audit is useful for immediate checks; Snyk provides richer reporting and automated fix PRs.


Key Differences

Concept | NuGet (.NET) | npm/pnpm (JS/TS)
Package manifest | .csproj XML | package.json
Registry | NuGet.org | npmjs.com
Version default | Exact (13.0.3) | Range (^13.0.3)
Lockfile | packages.lock.json (optional) | pnpm-lock.yaml (required)
Package storage | Global cache ~/.nuget/packages | Local node_modules/ + global pnpm store
Disk efficiency | High (single global cache) | Low (npm), High (pnpm)
Commit packages | Never | Never (same)
Build scripts | MSBuild targets | scripts in package.json
Global tools | dotnet tool install -g | pnpm add -g (avoid) or pnpm dlx
Locked CI install | dotnet restore --locked-mode | pnpm install --frozen-lockfile
Vulnerability audit | dotnet list package --vulnerable | pnpm audit
Dependency scope | All deps compile to the project | dependencies vs devDependencies
Phantom deps | Not possible (explicit references) | Possible with npm/yarn (not with pnpm)

Gotchas for .NET Engineers

1. Phantom Dependencies Will Burn You — Use pnpm’s Strict Mode

In .NET, if you want to use a library, you must add a <PackageReference> to your .csproj. The compiler will not let you use code from a package you haven’t explicitly declared.

npm’s flat node_modules structure breaks this contract. When package A depends on lodash, npm hoists lodash to the top-level node_modules folder. Your code can now import lodash and it will work — even though lodash is not in your package.json. This is a phantom dependency: you’re using a package you never declared.

The problem: when package A later drops its lodash dependency or pins a different version, your code silently breaks at runtime with a module-not-found error or, worse, a subtle behavior change.

pnpm prevents this by design. Its node_modules structure uses symlinks and only makes explicitly declared packages importable. Attempting to import a phantom dependency throws an error immediately, at development time. This is a primary reason we use pnpm.

# With npm (phantom dependency works silently)
npm install some-package   # some-package depends on lodash
# now "import lodash" works in your code — dangerous

# With pnpm (phantom dependency caught immediately)
pnpm add some-package      # some-package depends on lodash
# "import lodash" throws: Cannot find module 'lodash'
# Correct fix: pnpm add lodash (make it explicit)

If you inherit a project that was using npm and migrate to pnpm, the phantom dependency audit can reveal dozens of packages your code relies on but never declared.

2. Peer Dependency Warnings Are Not Optional — Address Them

When you install a package in .NET, NuGet resolves the dependency graph and silently picks compatible versions. In npm/pnpm, some packages declare peer dependencies: packages they require you to have installed separately, at a specific version range, in your own project.

A common example is a React component library that lists react as a peer dependency rather than a direct dependency, because it should use your version of React, not install its own.

 WARN  Issues with peer dependencies found
└─┬ @tanstack/react-query 5.18.1
  └── ✕ unmet peer react@^18.0.0: found 17.0.2

This warning means: @tanstack/react-query@5.18.1 requires React 18, but your project has React 17. In .NET, NuGet would fail the restore with a version-conflict error rather than continue. In npm, the installation succeeds with a warning — and you get subtle runtime failures or simply broken behavior.

Peer dependency warnings must be resolved, not ignored. The fix is either to upgrade the peer (upgrade React to 18) or to find a version of the package that supports your current peer version.
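
As a concrete sketch (the versions below are illustrative; check the library's actual peer range before choosing):

# Option 1: upgrade the peer so it satisfies the required range
pnpm add react@^18.0.0 react-dom@^18.0.0

# Option 2: install a version of the library that supports your current peer
pnpm add @tanstack/react-query@4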

3. The node_modules Folder Is Not the Global Cache — Delete It Freely

NuGet’s global cache at ~/.nuget/packages is precious — clearing it forces re-downloading everything. node_modules is not the cache; it is a local installation folder that can always be recreated from the lockfile.

The correct mental model: node_modules is a build artifact, like your bin/ and obj/ folders. You can and should delete it when things behave strangely:

# The Node.js equivalent of "clean solution"
rm -rf node_modules
pnpm install

# In a monorepo — delete all node_modules recursively
find . -name "node_modules" -type d -prune -exec rm -rf '{}' +
pnpm install

This is standard practice. Engineers run it several times a week when debugging dependency issues. Unlike deleting the NuGet cache, it does not trigger a network re-download if pnpm’s global store already has the packages.

4. The ^ Range Bites You in Lockfile-Free Environments

If you ever run pnpm install without a lockfile present — which happens when you clone a fresh repo before someone committed the lockfile, or when a lockfile is incorrectly gitignored — you will install whatever the latest versions within each range are at that moment. Two engineers cloning the same repo on different days can end up with different transitive dependency versions.

The lockfile must be committed. If pnpm-lock.yaml is in your .gitignore, remove it immediately. Check your .gitignore template — some generic Node.js templates include lockfiles in .gitignore by mistake.

# Verify your lockfile is tracked
git ls-files pnpm-lock.yaml   # should output the filename if tracked
git ls-files package-lock.json

If the lockfile exists but shows constant churn in git diff with no actual dependency changes, the cause is usually different engineers using different package manager versions. Standardize the pnpm version in package.json:

{
  "packageManager": "pnpm@8.15.1"
}

5. npm Scripts Have No Dependency Graph — Order Is Your Responsibility

MSBuild targets can declare BeforeTargets, AfterTargets, and DependsOnTargets, and the build system executes them in the correct order. package.json scripts have none of this. They are independent shell commands. Ordering is enforced only by explicit chaining:

{
  "scripts": {
    "prebuild": "pnpm run generate",  // "pre" prefix runs before "build"
    "build": "tsc",
    "postbuild": "pnpm run copy-assets"  // "post" prefix runs after "build"
  }
}

The pre<name> and post<name> convention exists for simple sequencing, but it becomes unwieldy for complex pipelines. For monorepos with inter-package dependencies, use Turborepo (Article 6.5), which provides the dependency graph that package.json scripts lack.
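
Within a single package, explicit chaining with && is the other common way to sequence steps without pre/post hooks (a sketch; the script names are illustrative):

{
  "scripts": {
    "generate": "prisma generate",
    "copy-assets": "cp -r src/assets dist/assets",
    "build:full": "pnpm run generate && tsc && pnpm run copy-assets"
  }
}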


Hands-On Exercise

This exercise sets up a minimal TypeScript project from scratch using pnpm and exercises the core package management concepts covered in this article.

Prerequisites: pnpm installed (npm install -g pnpm or via brew install pnpm), Node.js 20+.

Step 1: Create the project

mkdir pkg-exercise && cd pkg-exercise
pnpm init

This creates a minimal package.json. Open it and observe the structure.

Step 2: Add dependencies with intentional version variation

pnpm add zod                    # adds with ^ range (e.g., "^3.22.4")
pnpm add -D typescript          # dev dependency
pnpm add -D @types/node

Inspect package.json — note the ^ prefixes. Inspect pnpm-lock.yaml — note the exact resolved versions.

Step 3: Understand the lockfile

# Simulate a fresh clone
rm -rf node_modules
pnpm install --frozen-lockfile   # installs exact versions from lockfile

Now modify the zod version range in package.json to "zod": "^3.0.0" without running pnpm install. Then run:

pnpm install --frozen-lockfile   # this should fail

The --frozen-lockfile flag fails because your package.json range no longer matches the lockfile. This is how CI catches drift. Revert the change.

Step 4: Audit the dependency tree

pnpm list                         # show installed packages
pnpm list --depth 3               # show transitive dependencies
pnpm audit                        # check for vulnerabilities

Step 5: Add a script and run it

Add to package.json:

{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "build": "tsc"
  }
}

Create a tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src"]
}

Create src/index.ts:

import { z } from "zod";

const UserSchema = z.object({
  name: z.string(),
  email: z.string().email(),
});

type User = z.infer<typeof UserSchema>;

const user: User = UserSchema.parse({ name: "Alice", email: "alice@example.com" });
console.log(user);

Run it:

pnpm typecheck    # type-checks without emitting files
pnpm build        # compiles to dist/
node dist/index.js

Step 6: Observe pnpm’s strict dependency isolation

Create a second file src/attempt-phantom.ts:

// This would work with npm (if zod's deps included something useful)
// With pnpm, you can only import what you explicitly installed
import { z } from "zod";  // works — declared in package.json

// If you tried to import something not in package.json:
// import _ from "lodash";  // would fail: Cannot find module 'lodash'
// Fix: pnpm add lodash

This reinforces that pnpm enforces explicit dependency declarations — the same contract NuGet enforces in .NET.


Quick Reference

Command Mapping

| dotnet CLI | pnpm equivalent |
| --- | --- |
| dotnet restore | pnpm install |
| dotnet restore --locked-mode | pnpm install --frozen-lockfile |
| dotnet add package Foo | pnpm add foo |
| dotnet add package Foo --version 1.2.3 | pnpm add foo@1.2.3 |
| dotnet remove package Foo | pnpm remove foo |
| dotnet list package | pnpm list |
| dotnet list package --outdated | pnpm outdated |
| dotnet list package --vulnerable | pnpm audit |
| dotnet build | pnpm build (runs build script) |
| dotnet test | pnpm test (runs test script) |
| dotnet tool install -g foo | pnpm add -g foo (prefer pnpm dlx) |
| dotnet tool run foo | pnpm exec foo |
| dotnet new foo (one-off scaffolding) | pnpm dlx create-foo@latest |

package.json Structure Reference

{
  "name": "my-project",
  "version": "1.0.0",
  "private": true,
  "packageManager": "pnpm@8.15.1",

  "dependencies": {
    "zod": "^3.22.4"
  },

  "devDependencies": {
    "typescript": "^5.3.3",
    "@types/node": "^20.11.0",
    "vitest": "^1.2.0"
  },

  "scripts": {
    "build": "tsc",
    "dev": "ts-node-dev src/main.ts",
    "test": "vitest run",
    "test:watch": "vitest",
    "typecheck": "tsc --noEmit",
    "lint": "eslint src"
  },

  "pnpm": {
    "overrides": {
      "some-vulnerable-dep": ">=4.17.21"
    }
  }
}

Version Range Quick Reference

| Range | Allows | Example resolves to |
| --- | --- | --- |
| "1.2.3" | Only 1.2.3 | 1.2.3 |
| "^1.2.3" | >=1.2.3 <2.0.0 | Latest 1.x.x |
| "~1.2.3" | >=1.2.3 <1.3.0 | Latest 1.2.x |
| ">=1.2.3" | 1.2.3 or higher | Latest overall |
| "1.x" | >=1.0.0 <2.0.0 | Latest 1.x.x |
| "*" | Any version | Absolute latest |

Critical .gitignore Entries

# Always ignore — recreated by pnpm install
node_modules/

# Never ignore — required for reproducibility
# (make sure these lines are NOT in your .gitignore)
# pnpm-lock.yaml
# package-lock.json
# yarn.lock

Common Diagnostic Commands

pnpm why <package>         # why is this package installed? (like NuGet dependency graph)
pnpm list --depth 10       # full dependency tree
pnpm store path            # location of pnpm's global store
pnpm store prune           # clean up unused packages from global store
pnpm dedupe                # optimize lockfile by deduplicating dependencies

Further Reading

Project Structure: Solutions & Projects vs. Monorepos & Workspaces

For .NET engineers who know: .sln files, .csproj files, Project References, and MSBuild’s role in assembling multi-project solutions. You’ll learn: How the JS/TS ecosystem maps Solution → Monorepo, Project → Package, Assembly → npm package, and how to navigate an unfamiliar JS/TS codebase from the file system up. Time: 10-15 minutes


The .NET Way (What You Already Know)

In .NET, physical organization and logical organization are both handled by a pair of files: the Solution (.sln) and the Project (.csproj). The solution is the container — it lists which projects belong together and in what order to build them. Each project compiles to a single assembly (.dll or .exe), has its own dependencies defined in the .csproj, and can reference other projects in the same solution via <ProjectReference>.

A typical multi-project solution looks like this:

MyApp.sln
├── src/
│   ├── MyApp.Api/
│   │   ├── MyApp.Api.csproj       # references MyApp.Core
│   │   └── Controllers/
│   ├── MyApp.Core/
│   │   ├── MyApp.Core.csproj      # no internal references
│   │   ├── Domain/
│   │   └── Services/
│   └── MyApp.Infrastructure/
│       ├── MyApp.Infrastructure.csproj  # references MyApp.Core
│       └── Data/
└── tests/
    └── MyApp.Tests/
        ├── MyApp.Tests.csproj     # references MyApp.Api, MyApp.Core
        └── ...

The .csproj is doing two things at once: it defines how the project compiles (target framework, nullable settings, warnings-as-errors), and it defines what the project depends on (NuGet packages and project references). The SDK-style .csproj makes this reasonably concise.

The .sln ties everything together. Visual Studio reads it to know which projects to load. dotnet build MyApp.sln builds all of them in dependency order. dotnet test MyApp.sln finds all test projects automatically.

This model has two important properties that you should keep in mind as you read the rest of this article: strong isolation (each project is its own assembly, its own namespace root, its own compilation unit) and explicit dependency graph (project references are declared, not implied).


The JS/TS Way

Single-App Projects

Before covering monorepos, it is worth knowing what a standalone JS/TS project looks like, because you will encounter these at least as often as monorepos.

The root-level package.json is the entry point for understanding any JS/TS project — it is the rough equivalent of reading a .csproj file, except it also contains the build scripts, the test runner invocation, and sometimes the project’s entire configuration surface.

A standalone Next.js application:

my-next-app/
├── package.json          # dependencies, scripts, name/version
├── package-lock.json     # lockfile (or pnpm-lock.yaml if using pnpm)
├── tsconfig.json         # TypeScript compiler configuration
├── next.config.ts        # Next.js framework configuration
├── .env.example          # documented environment variable template
├── .env.local            # actual secrets (never committed)
├── src/
│   ├── app/              # App Router: file-system-based routing
│   │   ├── layout.tsx    # root layout (like _Layout.cshtml)
│   │   ├── page.tsx      # root page (/)
│   │   ├── globals.css
│   │   └── dashboard/
│   │       └── page.tsx  # /dashboard route
│   ├── components/       # shared React components
│   ├── lib/              # utility functions, type definitions
│   └── types/            # global TypeScript type declarations
└── public/               # static assets (no build processing)

A standalone NestJS API:

my-nest-api/
├── package.json
├── tsconfig.json
├── tsconfig.build.json   # extends tsconfig.json, excludes tests
├── nest-cli.json         # NestJS build tooling configuration
├── .env.example
├── src/
│   ├── main.ts           # application bootstrap (like Program.cs)
│   ├── app.module.ts     # root module (like Startup.cs / IServiceCollection)
│   ├── app.controller.ts
│   ├── app.service.ts
│   └── users/            # feature module (like a domain project)
│       ├── users.module.ts
│       ├── users.controller.ts
│       ├── users.service.ts
│       ├── dto/
│       │   ├── create-user.dto.ts
│       │   └── update-user.dto.ts
│       └── entities/
│           └── user.entity.ts
└── test/
    ├── app.e2e-spec.ts
    └── jest-e2e.json

Note the NestJS layout: the users/ directory is a self-contained feature slice — module, controller, service, DTOs, and entities all in one folder. This is a NestJS convention, not a file-system-enforced rule. It resembles organizing by domain in ASP.NET Core (vertical slice architecture), but nothing enforces it at compile time.

Monorepos

When your project grows to include multiple apps or shared libraries, a monorepo packages them together under a single repository root. This is the direct equivalent of a .sln containing multiple .csproj files.

The structure for a monorepo containing a Next.js frontend, a NestJS API, and a shared types package:

my-project/                         # .sln equivalent (the root)
├── package.json                    # root package.json (workspace definition)
├── pnpm-workspace.yaml             # tells pnpm which folders are packages
├── pnpm-lock.yaml                  # single lockfile for the entire monorepo
├── turbo.json                      # Turborepo build orchestration config
├── tsconfig.base.json              # base TypeScript config, extended by all packages
├── .env.example
├── apps/
│   ├── web/                        # Next.js frontend
│   │   ├── package.json            # name: "@myproject/web"
│   │   ├── tsconfig.json           # extends ../../tsconfig.base.json
│   │   ├── next.config.ts
│   │   └── src/
│   └── api/                        # NestJS backend
│       ├── package.json            # name: "@myproject/api"
│       ├── tsconfig.json
│       ├── nest-cli.json
│       └── src/
└── packages/
    ├── types/                      # shared TypeScript types
    │   ├── package.json            # name: "@myproject/types"
    │   ├── tsconfig.json
    │   └── src/
    │       └── index.ts
    └── utils/                      # shared utility functions
        ├── package.json            # name: "@myproject/utils"
        ├── tsconfig.json
        └── src/
            └── index.ts

The root package.json in a pnpm workspace looks like this:

{
  "name": "my-project",
  "private": true,
  "scripts": {
    "dev": "turbo run dev",
    "build": "turbo run build",
    "test": "turbo run test",
    "lint": "turbo run lint"
  },
  "devDependencies": {
    "turbo": "^2.0.0",
    "typescript": "^5.4.0"
  }
}

And pnpm-workspace.yaml:

packages:
  - "apps/*"
  - "packages/*"

This file is roughly equivalent to the project list inside a .sln file. It tells pnpm: these directories are the packages that make up this workspace. Every directory listed here must have its own package.json.

Workspace Dependencies (Project References)

When @myproject/web needs types from @myproject/types, you declare that dependency in apps/web/package.json using the workspace: protocol:

{
  "name": "@myproject/web",
  "dependencies": {
    "@myproject/types": "workspace:*"
  }
}
// .NET equivalent in MyApp.Api.csproj:
// <ProjectReference Include="../MyApp.Core/MyApp.Core.csproj" />

The workspace:* syntax tells pnpm: use the local version of this package, not the one from the npm registry. At build time, pnpm symlinks the packages together. No separate publish step is required for internal packages during development.

tsconfig.json: The .csproj for Compilation

tsconfig.json is the TypeScript compiler configuration file. It controls what TypeScript compiles, how, and what rules apply. For .NET engineers, the mental model is: .csproj’s <PropertyGroup> section, but for the TypeScript compiler rather than the C# compiler.

A representative tsconfig.json for a NestJS API:

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "ES2022",
    "lib": ["ES2022"],
    "strict": true,
    "esModuleInterop": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "outDir": "./dist",
    "rootDir": "./src",
    "paths": {
      "@/*": ["./src/*"]
    }
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist", "test"]
}

Mapped to .csproj equivalents:

| tsconfig.json | .csproj / MSBuild equivalent |
| --- | --- |
| "target": "ES2022" | <TargetFramework>net9.0</TargetFramework> |
| "strict": true | <Nullable>enable</Nullable> + <TreatWarningsAsErrors>true</TreatWarningsAsErrors> |
| "outDir": "./dist" | <OutputPath>bin/Release</OutputPath> |
| "rootDir": "./src" | <Compile Include="src\**\*.cs" /> |
| "paths" | N/A in .csproj; similar to project references or using aliases |
| "exclude" | <Compile Remove="..." /> |
| "experimentalDecorators": true | N/A — decorators are not part of C# |

In a monorepo, you typically have a tsconfig.base.json at the root that defines shared settings, then each package’s tsconfig.json extends it and adds package-specific overrides:

// packages/types/tsconfig.json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "dist",
    "rootDir": "src",
    "declaration": true,
    "declarationMap": true
  },
  "include": ["src"]
}

The "declaration": true option generates .d.ts type definition files alongside the compiled JavaScript. This is how TypeScript packages expose their types to consumers — the rough equivalent of having a public API surface that other assemblies can reference.

Build Orchestration with Turborepo

Turborepo (often shortened to “turbo”) is the build orchestrator for JS/TS monorepos. The closest .NET analogy is MSBuild’s dependency graph resolution: turbo understands which packages depend on which, and it builds them in the right order while parallelizing everything it safely can.

The turbo.json configuration:

{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    },
    "test": {
      "dependsOn": ["build"]
    },
    "lint": {}
  }
}

The "^build" syntax means “run the build task in all dependencies first.” So if @myproject/web depends on @myproject/types, running turbo build in the root will build types before building web. This is the behavior you get for free from MSBuild with <ProjectReference> — turbo makes it explicit.

Turbo also provides build caching: if nothing in a package has changed since the last build, turbo skips that package’s build entirely and restores outputs from cache. This is similar to incremental compilation in MSBuild, but turbo’s caching is more aggressive and can be distributed across CI runners.


Key Differences

| Concept | .NET | JS/TS |
| --- | --- | --- |
| Workspace container | .sln file | pnpm-workspace.yaml + root package.json |
| Package/project definition | .csproj | package.json in each app/package directory |
| Compilation output | Assembly (.dll) | JavaScript files in dist/ + type definitions (.d.ts) |
| Internal dependency | <ProjectReference> | "@myproject/pkg": "workspace:*" in package.json |
| External dependency | <PackageReference> | Named dependency in package.json dependencies |
| Compilation settings | <PropertyGroup> in .csproj | compilerOptions in tsconfig.json |
| Build orchestration | MSBuild (implicit, via solution) | Turborepo (explicit, via turbo.json) |
| Build cache | MSBuild incremental (per-machine) | Turbo cache (per-machine or remote) |
| Namespace root | Assembly name / RootNamespace | No enforced equivalent; conventions vary |
| Package visibility | public/internal access modifiers | No runtime enforcement; exports field in package.json restricts module resolution |

One difference that often surprises .NET engineers: there is no concept of internal visibility in TypeScript at the package level. You can mark members private or protected on classes, but you cannot prevent another package in your monorepo from importing a module that you intend to be “internal” — unless you carefully configure the exports field in package.json to limit which paths are publicly resolvable.


Gotchas for .NET Engineers

1. node_modules exists in multiple places in a monorepo, and this is intentional.

In a pnpm workspace, each package has its own node_modules directory containing symlinks to the actual package files stored in a central content-addressable store. You will also see a node_modules at the root. Do not try to rationalize this based on your NuGet mental model — NuGet has a single global cache and project-local references; pnpm has a similar global store but surfaces local node_modules per package to satisfy Node.js’s module resolution algorithm. The key rule: never manually modify node_modules. Run pnpm install at the repo root and let pnpm manage the structure.

2. There is no single entry point for “build all projects in the right order” unless you configure it.

In .NET, dotnet build MySolution.sln automatically resolves the dependency graph and builds in order. In a JS/TS monorepo, this is not automatic. Without Turborepo (or a similar orchestrator like Nx), running pnpm build in the root will attempt to build all packages in parallel or alphabetical order — which will fail if @myproject/web depends on the built output of @myproject/types. You need Turborepo’s "dependsOn": ["^build"] configuration to get the correct ordering. Many monorepos that look broken to newcomers are broken for exactly this reason: someone ran a build command outside of turbo.

3. tsconfig.json paths aliases require a separate runtime resolution step.

The paths option in tsconfig.json — commonly used to define import aliases like @/ to map to src/ — is a TypeScript compiler feature only. It tells the TypeScript language server and tsc how to resolve imports, but it does not affect the output JavaScript. If you use import { something } from '@/lib/utils' and your runtime is Node.js, the path alias will not be resolved at runtime unless you configure an additional tool to handle it. For NestJS, this means adding path alias configuration to nest-cli.json or using tsconfig-paths at startup. For Next.js, the framework handles this automatically. For a shared package compiled with tsc, you need tsc-alias or similar. This is a common source of “works in the editor, fails at runtime” bugs.
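
One common pattern for packages compiled with plain tsc is to rewrite the aliases after compilation. This is a sketch assuming the third-party tsc-alias package; NestJS and Next.js projects usually do not need it because their tooling resolves the aliases for you:

// package.json — tsc leaves "@/..." specifiers in the emitted JS,
// then tsc-alias rewrites them to relative paths Node.js can resolve
{
  "scripts": {
    "build": "tsc && tsc-alias"
  }
}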

4. The absence of a .sln-equivalent means project discovery is by convention, not registration.

In .NET, a project must be explicitly added to the solution to be part of the build. In a pnpm workspace, any directory matching the glob patterns in pnpm-workspace.yaml is automatically treated as a workspace package. This means a new packages/my-new-lib/ directory with a package.json is immediately part of the workspace on the next pnpm install. There is no registration step. This is convenient but also means orphaned or experimental packages sitting in the right directory are silently included.

5. package.json "main", "module", and "exports" fields control what is actually public.

When one package in your monorepo imports from another, Node.js resolves what gets imported based on the "exports" field in the target package’s package.json. If you set "exports": { ".": "./dist/index.js" }, then only what is exported from dist/index.js is importable — all other paths in the package are opaque to the consumer. Omit exports, and Node.js allows importing any path inside the package. This is the closest you get to .NET’s internal keyword for package-level boundaries.
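
A sketch of what that restriction looks like in a shared package (paths illustrative; TypeScript "types" conditions omitted for brevity):

// packages/utils/package.json
{
  "name": "@myproject/utils",
  "exports": {
    ".": "./dist/index.js",
    "./dates": "./dist/dates.js"
  }
}

// import { formatDate } from "@myproject/utils/dates"           → allowed
// import { x } from "@myproject/utils/dist/internal/helpers.js"  → ERR_PACKAGE_PATH_NOT_EXPORTED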


Hands-On Exercise

The goal is to orient yourself in an unfamiliar monorepo — the same skill you use when joining a team with an existing .sln.

Clone any public monorepo (or use the one you are working in) and complete this walkthrough:

Step 1: Find and read the workspace definition.

cat pnpm-workspace.yaml
# or, if using npm workspaces:
cat package.json | grep -A 10 '"workspaces"'

This tells you how many packages exist and where they live.

Step 2: List all packages and their names.

# Lists every package.json in the workspace, excluding node_modules
find . -name "package.json" \
  -not -path "*/node_modules/*" \
  -not -path "*/.next/*" \
  -not -path "*/dist/*" \
  | sort

For each package.json found, read the "name" field. This is your project’s assembly list.

Step 3: Map the dependency graph.

For each package, check what workspace packages it depends on:

# In the web app's directory:
cat apps/web/package.json | grep "workspace:"

Draw or mentally map the dependency graph. This is the equivalent of reading <ProjectReference> entries.

Step 4: Find the entry points.

For each package, locate:

  • The "main" or "exports" field in package.json — this is the public API
  • The "scripts" field — this tells you how to run, build, and test the package
cat apps/api/package.json | grep -A 10 '"scripts"'

Step 5: Read the tsconfig.json files.

Start with the root tsconfig.base.json if it exists, then read each package’s tsconfig.json to understand its compilation targets, path aliases, and any special settings.

Step 6: Read turbo.json.

Understand the task dependency chain. Answer: what runs before what? What is cached? What always reruns?

After completing these six steps, you should be able to answer: how many packages are in this monorepo, what does each produce, what depends on what, and how do you build and run the whole system.


Quick Reference

| You want to… | .NET command | JS/TS command |
| --- | --- | --- |
| Create a new solution | dotnet new sln -n MyApp | mkdir my-app && cd my-app && pnpm init |
| Add a project to solution | dotnet sln add ./src/MyApp.Api | Add directory to pnpm-workspace.yaml glob |
| Add a NuGet package | dotnet add package Newtonsoft.Json | pnpm add zod --filter @myproject/api |
| Add a project reference | Edit .csproj: <ProjectReference ...> | Edit package.json: "@myproject/types": "workspace:*" |
| Build all projects | dotnet build MySolution.sln | pnpm turbo build (from root) |
| Run a specific project | dotnet run --project src/MyApp.Api | pnpm --filter @myproject/api dev |
| Run all tests | dotnet test MySolution.sln | pnpm turbo test |
| Restore dependencies | dotnet restore | pnpm install (from root) |
| List all projects | Visual Studio Solution Explorer | find . -name "package.json" -not -path "*/node_modules/*" |

| .NET concept | JS/TS equivalent |
| --- | --- |
| .sln file | pnpm-workspace.yaml + root package.json |
| .csproj file | package.json (per package) |
| <PropertyGroup> | tsconfig.json compilerOptions |
| <PackageReference> | dependencies / devDependencies in package.json |
| <ProjectReference> | "@scope/package": "workspace:*" |
| Assembly (.dll) | dist/ directory + .d.ts type definitions |
| internal access modifier | exports field in package.json (path restriction) |
| MSBuild dependency graph | turbo.json "dependsOn": ["^build"] |
| MSBuild incremental compile | Turborepo task caching |
| dotnet build | turbo build |
| dotnet test | turbo test |
| dotnet restore | pnpm install |
| Solution-level global.json | Root package.json + .nvmrc or .node-version |

Further Reading

  • pnpm Workspaces — Official Documentation — The definitive reference for workspace configuration, the workspace: protocol, and filtering.
  • Turborepo Core Concepts — Explains task pipelines, caching, and the dependency graph that replaces MSBuild’s implicit ordering.
  • TypeScript Project References — The TypeScript compiler’s native multi-project support. Less common in the monorepo ecosystem than Turborepo, but worth knowing, especially for library authors.
  • tsconfig.json Reference — Complete reference for every compilerOptions field. Bookmark this; you will use it when debugging type errors that seem to disappear or appear based on which directory you run tsc from.

Build Systems: MSBuild vs. the JS Build Toolchain

For .NET engineers who know: dotnet build, MSBuild, .csproj settings, and the C# compilation pipeline. You’ll learn: What actually happens when you build a TypeScript project — the full chain from tsc through bundlers — and which tsconfig.json settings map to what you already configure in .csproj. Time: 15-20 minutes

The .NET Way (What You Already Know)

When you run dotnet build, MSBuild orchestrates a pipeline that is largely invisible because Microsoft owns the entire chain. The C# compiler (csc or Roslyn) takes your source files, resolves project references from .csproj, compiles to IL (Intermediate Language), and writes a .dll to bin/. The runtime (CLR/CoreCLR) JIT-compiles that IL to native code at execution time. You configure this pipeline through .csproj properties: <TargetFramework>, <Nullable>, <ImplicitUsings>, <LangVersion>, <Optimize>. If you need to see what’s actually happening, dotnet build -v detailed will show you the full MSBuild task graph, but most engineers never need to look.

The key characteristics of this model: one compiler, one build tool, one runtime, one output format. Everything is integrated and controlled by Microsoft. The tradeoff is that customizing the pipeline requires MSBuild expertise that most engineers don’t have.

<!-- The .csproj you know — this drives the entire build -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <LangVersion>13.0</LangVersion>
    <Optimize>true</Optimize>
    <AssemblyName>MyApp</AssemblyName>
    <RootNamespace>MyApp</RootNamespace>
  </PropertyGroup>
</Project>

MSBuild also handles: dependency resolution (NuGet restore), asset copying, linked files, conditional compilation (#if DEBUG), code generation (source generators), and packaging. It is a general-purpose build engine that happens to be the primary one for .NET.

The JS/TS Way

The JavaScript/TypeScript build story is fragmented because there is no single vendor controlling the chain. Instead, several tools each own one layer:

graph TD
    A["Source (.ts)"]
    B["TypeScript Compiler (tsc)\nType erasure, downleveling,\nmodule transformation"]
    C["JavaScript (.js)"]
    D["Bundler\n(Vite / Webpack / esbuild / Turbopack)\nTree-shaking, code-splitting, minification,\nasset handling, hot module replacement"]
    E["Optimized bundle (.js / .css / assets)"]
    F["Runtime (Node.js / Browser V8)"]

    A --> B --> C --> D --> E --> F

Unlike MSBuild, these tools do not automatically compose. Each one has its own configuration file, its own CLI, its own plugin ecosystem, and its own opinion about how the pipeline should work. The good news: most frameworks (Next.js, NestJS, Vite-based projects) wire the chain together for you and expose only the configuration you actually need to touch.

Step 1: TypeScript Compiler (tsc)

tsc is a type checker and transpiler. It does two distinct things, and it helps to separate them mentally:

Type checking — tsc reads your TypeScript, builds a type graph, and reports errors. This is equivalent to Roslyn’s semantic analysis pass. No output is produced; it just tells you whether your types are correct.

Transpilation — tsc strips TypeScript syntax (types, interfaces, and, in some configurations, decorators) and produces plain JavaScript. This is closer to what the C# compiler does when it lowers high-level C# syntax to IL — except TypeScript lowers to JavaScript source text, not binary bytecode.

Run type checking without emitting files:

# Type-check only (no output files) — equivalent to building to check errors
npx tsc --noEmit

# Compile to JavaScript
npx tsc

# Watch mode — equivalent to dotnet watch build
npx tsc --watch

The key thing to understand: tsc does not optimize. It does not tree-shake, minify, or bundle. It produces one .js file per .ts file, or a concatenated output if configured. For browser applications, raw tsc output is rarely what you ship. You need a bundler.

For NestJS (Node.js backend), tsc output is often sufficient — you run node dist/main.js directly. For Next.js (browser applications), Next.js runs its own build pipeline that uses the TypeScript compiler internally but you never invoke tsc directly for production.

Step 2: Bundlers

Bundlers solve the browser’s module loading problem. A browser cannot natively import 500 separate JavaScript files efficiently (though HTTP/2 mitigates this somewhat), and node_modules references do not work in a browser. A bundler:

  1. Starts from an entry point (main.ts, index.ts)
  2. Follows all import statements recursively, building a dependency graph
  3. Removes dead code that is never imported (tree-shaking)
  4. Splits the graph into chunks to enable lazy loading (code-splitting)
  5. Minifies the output (removes whitespace, renames variables)
  6. Handles non-JS assets (CSS, images, fonts)
  7. Outputs optimized bundles for deployment

There are four bundlers you will encounter:

Webpack — the industry standard from roughly 2015-2022. Highly configurable, enormous plugin ecosystem, complex configuration. If you are joining an existing project, you will likely encounter it. webpack.config.js can grow to several hundred lines of configuration. Vite largely replaced it for new projects.

Vite — the modern default for frontend projects. Built on esbuild (for development speed) and Rollup (for production bundling). Configuration is minimal by design. Nearly all new React, Vue, and Svelte projects use Vite unless they use a meta-framework. Development mode uses native ES modules in the browser with no bundling step, which makes hot module replacement near-instant.

esbuild — written in Go, extremely fast (10-100x faster than Webpack). Used internally by Vite for the dev server transformation step. Can be used standalone for simple bundling scenarios. Less configurable than Webpack but fast enough to make configuration concerns moot for many use cases.

Turbopack — Vercel’s Rust-based bundler, currently used in Next.js development mode. Not production-ready for standalone use yet.

A practical summary: you will probably not choose your bundler directly. If you are building a standalone React or Vue app, use Vite. If you are using Next.js, it handles bundling. If you are using NestJS, you probably do not need a bundler at all — just tsc.

Step 3: Framework Build Pipelines

This is where the complexity is hidden from you:

Next.js runs next build, which internally invokes the TypeScript compiler for type checking, then uses its own bundling pipeline (SWC for transformation, Webpack or Turbopack for bundling) to produce optimized server and client bundles. You rarely configure the bundler directly. Next.js has a next.config.js/next.config.ts for framework-level settings.

NestJS runs nest build (or tsc directly), which compiles TypeScript to JavaScript in the dist/ directory. NestJS’s build is straightforward — it is a Node.js application, so there is no browser bundling required. The nest-cli.json file controls build options.

Vite-based apps (standalone React, Vue) run vite build, which performs the Rollup-based production bundling. Type checking is not part of vite build itself; the conventional build script runs tsc --noEmit (or vue-tsc for Vue) first, then vite build.

# Next.js project
next build          # type-check + bundle + optimize
next dev            # development server with HMR

# NestJS project
nest build          # tsc compilation to dist/
nest start --watch  # watch mode with auto-restart

# Vite project (standalone React/Vue)
tsc --noEmit && vite build   # type-check, then bundle (vite build alone does not type-check)
vite dev            # development server with HMR

tsconfig.json in Depth

tsconfig.json is the closest equivalent to .csproj for TypeScript compilation settings. The mapping is not perfect — tsconfig.json controls the compiler only, not the full build pipeline — but understanding the correspondence makes it immediately familiar.

// tsconfig.json — annotated with .csproj equivalents
{
  "compilerOptions": {
    // --- OUTPUT TARGETING ---

    // What JS version to emit. Equivalent to <TargetFramework>.
    // "ES2022" means use modern JS syntax; "ES5" downlevels to IE-compatible JS.
    // Next.js and NestJS on modern Node.js: use "ES2022" or later.
    "target": "ES2022",

    // Module system for emitted code. Equivalent to choosing assembly output type.
    // "ESNext" or "ES2022" for native ES modules (import/export in output).
    // "CommonJS" for Node.js-compatible require() output.
    // "NodeNext" for modern Node.js with native ESM support.
    // NestJS: "CommonJS". Next.js: handled internally, don't set manually.
    "module": "CommonJS",

    // How imports are resolved. The most important setting for avoiding import errors.
    // "NodeNext" — respects package.json "exports" field, requires .js extensions.
    // "Bundler" — assumes a bundler handles resolution; most permissive, used with Vite.
    // "Node16" — older Node.js-compatible resolution.
    // No .NET equivalent — .NET assembly resolution is implicit.
    "moduleResolution": "NodeNext",

    // Output directory for compiled JS. Equivalent to <OutputPath> or bin/ directory.
    "outDir": "./dist",

    // Root directory of source files. Equivalent to the project root in .csproj.
    "rootDir": "./src",

    // --- TYPE CHECKING ---

    // Enables all strict type checking flags. Equivalent to <Nullable>enable</Nullable>
    // plus Roslyn analyzer warnings. Always enable this — the cost of disabling it
    // is technical debt that compounds fast.
    "strict": true,

    // Null and undefined are distinct types that must be handled explicitly
    // (included in strict, but worth knowing by name).
    // Equivalent to nullable reference types in C#.
    "strictNullChecks": true,

    // Disallow implicit 'any' types. Without this, TypeScript silently widens
    // unresolvable types to 'any', defeating the purpose of the type system.
    "noImplicitAny": true,

    // --- INTEROP ---

    // Allow importing CommonJS modules with ES module syntax.
    // Required when mixing import/require in Node.js projects.
    "esModuleInterop": true,

    // Allow importing JSON files directly. No .NET equivalent.
    "resolveJsonModule": true,

    // Preserve JSX syntax for React (let the bundler handle it) vs. compiling it.
    // "preserve" for Next.js/Vite (they handle JSX transformation).
    // "react-jsx" for projects where tsc handles JSX.
    "jsx": "preserve",

    // --- PATH ALIASES ---

    // Import path aliases. Equivalent to setting namespace aliases or project refs.
    // "@/*" means "anything starting with @/ maps to ./src/*"
    // After configuring this, `import { foo } from '@/lib/foo'` works anywhere.
    // Note: bundlers (Vite, webpack) must also be configured to respect these paths.
    "paths": {
      "@/*": ["./src/*"],
      "@components/*": ["./src/components/*"],
      "@lib/*": ["./src/lib/*"]
    },

    // --- EMIT CONTROL ---

    // When true, do not emit output files — type-check only. Used in CI for checking
    // without building, or when the bundler handles transpilation.
    // Equivalent to running Roslyn as an analyzer without producing output.
    // Set to false here because this config is expected to produce dist/ output.
    "noEmit": false,

    // Include type declarations from @types/* packages (e.g., @types/node).
    // Types are loaded automatically from node_modules/@types — no explicit includes needed
    // unless you want to restrict which ones are loaded.
    "types": ["node"]
  },

  // Which files to include. Equivalent to <Compile Include="..." /> in .csproj.
  // Default: all .ts/.tsx files in the project root.
  "include": ["src/**/*"],

  // Which files to exclude. node_modules is excluded by default.
  "exclude": ["node_modules", "dist"]
}

The strict flag deserves special attention. It is a shorthand that enables multiple sub-flags: strictNullChecks, noImplicitAny, strictFunctionTypes, strictBindCallApply, strictPropertyInitialization, noImplicitThis, and alwaysStrict. A project without strict: true is a project where TypeScript cannot be trusted, because the type system has holes large enough to drive a truck through. Treat enabling strict on an existing codebase as a migration task, not a one-line change — you will find dozens of latent type errors.
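
A small example of the class of bug strict mode surfaces (error messages approximate):

// With "strict": false this compiles; with "strict": true it does not.
function findUser(users: { name: string }[], name: string) {
  const match = users.find(u => u.name === name); // type: { name: string } | undefined
  return match.name.toUpperCase();                // strictNullChecks: 'match' is possibly 'undefined'
}

function total(items) {                           // noImplicitAny: parameter 'items' implicitly has an 'any' type
  return items.reduce((sum, i) => sum + i.price, 0);
}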

The moduleResolution setting is where most .NET engineers run into trouble. Here is the practical guide:

| Setting | Use when | Behavior |
| --- | --- | --- |
| "NodeNext" | Node.js projects using native ESM ("type": "module" in package.json) | Requires .js extensions on relative imports even in .ts files. Resolves the package.json exports field. |
| "Node16" | Older Node.js projects | Similar to NodeNext, slightly older semantics |
| "Bundler" | Any project using Vite, Next.js, or webpack | No extension requirements. Assumes the bundler will resolve everything. |
| "Node" | Legacy projects | The old default. Do not use for new projects. |

The target and module Matrix

These two settings interact in ways that confuse .NET engineers because there is no equivalent distinction in .csproj:

  • target controls what JavaScript syntax is emitted. ES2022 emits modern syntax (optional chaining, nullish coalescing, class fields). ES5 downlevels to older syntax that older browsers understand.
  • module controls how imports and exports are emitted. ESNext keeps import/export. CommonJS converts them to require()/module.exports.

These are independent settings, which is why you can have a target: "ES2022" (modern JS syntax) with module: "CommonJS" (Node.js-style imports) — this is exactly what NestJS uses.
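
To see the independence concretely, here is a sketch of how the same source is emitted under each module setting (output approximate and trimmed):

// src/read.ts (source)
import { readFileSync } from "node:fs";
export const read = (p: string) => readFileSync(p, "utf8");

// Emitted with "module": "ESNext": import/export preserved, only types removed
// import { readFileSync } from "node:fs";
// export const read = (p) => readFileSync(p, "utf8");

// Emitted with "module": "CommonJS": converted to require()/exports
// const node_fs_1 = require("node:fs");
// exports.read = (p) => node_fs_1.readFileSync(p, "utf8");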

For browser apps, modern frameworks handle this for you: Next.js and Vite set their own targets internally. For Node.js apps (NestJS), the typical configuration is:

// tsconfig.json for NestJS (Node.js backend)
{
  "compilerOptions": {
    "module": "CommonJS",
    "target": "ES2022",
    "moduleResolution": "Node",
    "strict": true,
    "esModuleInterop": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "outDir": "./dist",
    "rootDir": "./src"
  }
}

Note the experimentalDecorators and emitDecoratorMetadata flags — these are required for NestJS’s decorator-based DI system. Without emitDecoratorMetadata, the runtime reflection that NestJS uses to resolve constructor parameter types does not work. See Article 2.5 for the full explanation of decorators.

Multiple tsconfig Files

In a real project, you will often see multiple tsconfig files, not just one. This is equivalent to having separate build configurations (Debug/Release) but more granular:

tsconfig.json           ← Base configuration (shared settings)
tsconfig.build.json     ← Production build (excludes tests)
tsconfig.test.json      ← Test configuration (includes test files)

// tsconfig.build.json — extends base, used for production builds
{
  "extends": "./tsconfig.json",
  "exclude": ["node_modules", "dist", "**/*.spec.ts", "**/*.test.ts"]
}

// tsconfig.test.json — extends base, used by Vitest/Jest
{
  "extends": "./tsconfig.json",
  "include": ["src/**/*", "test/**/*"]
}

The extends keyword works like inheritance — the child config inherits all settings from the parent and overrides only what it specifies. There is no direct .csproj equivalent; the closest is build configuration transforms or the Directory.Build.props shared properties pattern in multi-project solutions.

What Tree-Shaking Actually Means

Tree-shaking is the bundler equivalent of the C# linker removing unused code from single-file executables. It works by static analysis of import/export statements: if you import { Button } from './components' but never use Button, the bundler removes it from the output bundle.

Tree-shaking only works with ES modules (import/export), not CommonJS (require()). This is one reason modern libraries ship as ES modules even when they also provide CommonJS builds.

// components/index.ts — exports many things
export { Button } from './Button';
export { Modal } from './Modal';
export { Table } from './Table';
export { Chart } from './Chart'; // 500KB library

// page.tsx — only imports Button
import { Button } from './components';

// After tree-shaking: Chart is NOT in the bundle.
// The bundler can prove it is never used.

The implication for library authors: side effects in module-level code defeat tree-shaking. Code that registers global state when a module is import-ed prevents the bundler from safely removing it even if nothing from that module is used. Well-designed libraries mark themselves "sideEffects": false in package.json to tell bundlers they are safe to tree-shake.
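
In package.json that declaration looks like this (a sketch; verify your library genuinely has no import-time side effects before setting it):

// package.json of a tree-shakeable library
{
  "name": "my-ui-library",
  "sideEffects": false
}

// If specific files do have side effects (e.g. global CSS), list them instead:
// "sideEffects": ["*.css"]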

Code-Splitting

Code-splitting is the bundler feature that splits your application into multiple chunks loaded on demand, rather than one large bundle loaded upfront. It is the browser equivalent of lazy-loading assemblies.

In Next.js, code-splitting is automatic: each page gets its own chunk, and dynamic imports create additional split points:

// next.js — static import (included in initial bundle)
import { HeavyChart } from './HeavyChart';

// next.js — dynamic import (creates a separate chunk, loaded on demand)
import dynamic from 'next/dynamic';
const HeavyChart = dynamic(() => import('./HeavyChart'), {
  loading: () => <p>Loading chart...</p>,
});

In Vite, you use standard dynamic imports:

// Vite — creates a separate chunk loaded when this import is executed
const { HeavyChart } = await import('./HeavyChart');

You do not need to configure code-splitting manually in most frameworks — the tooling handles it. You do need to be aware of it when deciding where to place large dependencies.

Key Differences

| Concern | .NET / MSBuild | JS/TS Toolchain |
| --- | --- | --- |
| Compilation | Roslyn compiles C# → IL | tsc compiles TypeScript → JavaScript (type erasure, not compilation to bytecode) |
| Build orchestration | MSBuild (integrated) | Vite / Webpack / esbuild / Next.js (separate, composed) |
| Runtime compilation | CLR JIT-compiles IL → native at runtime | V8 JIT-compiles JavaScript → native at runtime (similar model) |
| Build configuration | .csproj + MSBuild properties | tsconfig.json + vite.config.ts + next.config.ts (three separate files) |
| Output | .dll (IL bytecode) | .js files (plain text JavaScript) |
| Type safety at runtime | Types enforced by CLR | Types erased at runtime — JavaScript has no types |
| Tree-shaking | IL Linker for AOT/self-contained | Built into all modern bundlers |
| Code-splitting | N/A (assemblies loaded on demand) | Automatic in Next.js/Vite, manual dynamic imports |
| Hot reload | dotnet watch (full rebuild) | HMR via Vite/Next.js (module-level replacement, no full reload) |
| Strict mode equivalent | <Nullable>enable</Nullable> + analyzers | "strict": true in tsconfig |
| Path aliases | <PackageReference> / namespace imports | "paths" in tsconfig + bundler config |
| Multiple build configs | Debug/Release configurations | Multiple tsconfig files + environment variables |
| Language version | <LangVersion> | "target" in tsconfig (controls output JS syntax) |

Gotchas for .NET Engineers

1. TypeScript types do not exist at runtime — tsc is not your validator

This is the most important mental model shift in this entire article. In C#, if you declare string Name { get; set; }, the CLR enforces that constraint at runtime. In TypeScript, name: string is erased by tsc and the runtime JavaScript has no knowledge it ever existed.

// This looks like a type-safe function
function processUser(user: { name: string; age: number }) {
  console.log(user.name.toUpperCase());
}

// This compiles without error but throws at runtime
const response = await fetch('/api/user');
const data = await response.json(); // type: any
processUser(data); // No error from tsc — data could be anything

// If the API returns { Name: 'Alice', age: null }, you get:
// TypeError: Cannot read properties of undefined (reading 'toUpperCase')

The solution is runtime validation with Zod at every boundary where external data enters your application (API responses, environment variables, form submissions). See Article 2.3 for the full treatment. The rule of thumb: never trust JSON.parse() or response.json() without validating the shape.
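
A sketch of that boundary validation (schema and endpoint are illustrative; assumes this runs inside an async function):

import { z } from "zod";

const UserSchema = z.object({ name: z.string(), age: z.number() });

const response = await fetch("/api/user");
const parsed = UserSchema.safeParse(await response.json());

if (!parsed.success) {
  // The API broke its contract; fail loudly here instead of crashing somewhere downstream
  throw new Error(`Invalid user payload: ${parsed.error.message}`);
}

processUser(parsed.data); // parsed.data is typed { name: string; age: number }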

2. Configuring paths in tsconfig is not enough — your bundler also needs them

Path aliases ("paths": { "@/*": ["./src/*"] }) tell tsc how to resolve imports during type checking. They do not tell the bundler (Vite, webpack, Next.js) how to resolve them at build time. If you configure tsconfig.json paths but not the bundler, your app will type-check successfully but fail at runtime.

// This import works for tsc but may fail at bundle time
import { formatDate } from '@/lib/dates';

Each bundler has its own configuration:

// vite.config.ts — must mirror tsconfig paths
import { defineConfig } from 'vite';
import path from 'path';

export default defineConfig({
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src'),
    },
  },
});
// next.config.ts — Next.js reads tsconfig paths automatically
// No extra configuration needed for Next.js

// webpack.config.js — must mirror tsconfig paths
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      '@': path.resolve(__dirname, 'src'),
    },
  },
};

Next.js is the exception: it reads tsconfig.json paths and configures webpack/Turbopack automatically. In Vite projects, you must duplicate the configuration.

3. noEmit: true does not mean your build is type-safe

In CI pipelines, a common pattern is to run tsc --noEmit to type-check without producing output. The gotcha: tsc --noEmit only checks files that are included in your tsconfig.json. If your bundler (Vite, Next.js) has a separate mechanism for determining which files to process, there may be files that the bundler builds but tsc --noEmit never sees.

Additionally, tsc --noEmit will not catch errors in files excluded by the exclude field, including test files if they are excluded from the build tsconfig. Run a separate tsc --noEmit using tsconfig.test.json (or your test tsconfig) to catch type errors in test files.

# CI pipeline — type-check production code AND tests
npx tsc --noEmit -p tsconfig.json
npx tsc --noEmit -p tsconfig.test.json

4. experimentalDecorators and emitDecoratorMetadata must be set for NestJS — without them, DI silently fails

NestJS relies on the reflect-metadata package to perform runtime reflection on constructor parameter types. This mechanism requires emitDecoratorMetadata: true in tsconfig, which causes tsc to emit metadata about parameter types alongside the compiled JavaScript. Without it, NestJS cannot determine what types to inject, and you will get cryptic errors at startup — or, worse, undefined injected values.

The TypeScript decorator standard (Stage 3 TC39 proposal) does not support emitDecoratorMetadata. If you are using a project template that has switched to the new decorator standard without experimentalDecorators, NestJS will not work. Always verify both flags are set when setting up a NestJS project.
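
A minimal sketch of the constructor injection that depends on this metadata (class names illustrative):

import { Injectable } from "@nestjs/common";

@Injectable()
export class UsersRepository {
  findAll() { return []; }
}

@Injectable()
export class UsersService {
  // With emitDecoratorMetadata, tsc records that this parameter's type is UsersRepository,
  // so Nest knows what to inject; without it, the metadata is missing and injection fails.
  constructor(private readonly usersRepository: UsersRepository) {}
}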

5. Build output is readable JavaScript — not a security boundary

In .NET, distributing a .dll without source provides some level of obfuscation (though not real security). TypeScript compiles to readable JavaScript. Even with minification, your bundled output is plain text that anyone can read, beautify, and analyze. Source maps — if deployed — make it trivially readable.

Do not put secrets, API keys, or business logic that should remain private in client-side JavaScript. This applies equally to React, Vue, and any browser-targeted code. Server Components in Next.js are one mechanism for running code on the server only; anything in a Client Component runs in the browser and is visible.

6. The module and moduleResolution settings interact in non-obvious ways

If you set "module": "NodeNext" (the modern Node.js ESM-native setting), TypeScript requires that relative imports include the .js extension — even in .ts files:

// With "moduleResolution": "NodeNext" — required
import { foo } from './foo.js'; // .js extension in a .ts file (intentional)

// With "moduleResolution": "Bundler" — both work
import { foo } from './foo';
import { foo } from './foo.js';

The .js extension is correct even though the file on disk is foo.ts. This is because after compilation, the file will be foo.js, and Node.js resolves ESM imports before TypeScript runs. Most .NET engineers find this confusing and try to fix it by removing the extension, which breaks Node.js module resolution. If you are on NestJS and encounter this, check whether moduleResolution is set to NodeNext or Node.

Hands-On Exercise

This exercise builds your intuition for what each layer of the build chain actually does. You will take a simple TypeScript file through each stage manually, then examine how Next.js handles the same process automatically.

Part 1: The raw tsc pipeline

Create a minimal TypeScript project from scratch (no framework):

mkdir build-experiment && cd build-experiment
npm init -y
npm install typescript --save-dev
npx tsc --init

Edit tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "strict": true,
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"]
}

Create src/user.ts:

interface User {
  id: number;
  name: string;
  email: string | null;
}

function formatUser(user: User): string {
  return `${user.name} (${user.email ?? 'no email'})`;
}

export { User, formatUser };

Create src/main.ts:

import { User, formatUser } from './user';

const user: User = { id: 1, name: 'Alice', email: null };
console.log(formatUser(user));

Compile and inspect the output:

npx tsc
cat dist/user.js
cat dist/main.js

Observe: the interface User is completely absent from dist/user.js. The type annotation user: User is absent from dist/main.js. What remains is plain JavaScript — identical behavior, zero type information.

Now try breaking the types and see that tsc catches it:

// In src/main.ts — add this broken call
const badUser: User = { id: 'not-a-number', name: 'Bob', email: 'bob@example.com' };
npx tsc --noEmit
# error TS2322: Type 'string' is not assignable to type 'number'.

Part 2: Examine a Next.js build

In an existing Next.js project (or create one with npx create-next-app@latest):

# Examine the tsconfig.json and note: "moduleResolution": "bundler"
cat tsconfig.json

# Run the production build and examine what it produces
npm run build

# Look at what was generated
ls -la .next/
ls -la .next/static/chunks/

The .next/static/chunks/ directory contains code-split bundles. Each chunk is a separate JavaScript file that the browser loads on demand. Compare the file count to your source files — you will have far fewer bundles than source files (bundled) and the bundles will be significantly smaller than raw tsc output (minified).

Part 3: tsconfig path aliases

Add path aliases to the Next.js project’s tsconfig.json and verify they work:

{
  "compilerOptions": {
    "paths": {
      "@/components/*": ["./src/components/*"],
      "@/lib/*": ["./src/lib/*"]
    }
  }
}

Create src/lib/utils.ts:

export function cn(...classes: string[]): string {
  return classes.filter(Boolean).join(' ');
}

Use the alias in a component:

// src/app/page.tsx
import { cn } from '@/lib/utils';

export default function Page() {
  return <div className={cn('text-gray-900', 'font-semibold')}>Hello</div>;
}

Run npm run dev and verify the import resolves. Then run npx tsc --noEmit separately to verify type checking also passes. Both must succeed: the bundler resolves imports at build time, and tsc resolves them during type checking.

Quick Reference

| .csproj / MSBuild | tsconfig.json equivalent | Notes |
| --- | --- | --- |
| <TargetFramework>net9.0</TargetFramework> | "target": "ES2022" | Controls JS syntax output, not runtime version |
| <LangVersion>13.0</LangVersion> | N/A | TypeScript version is controlled by the typescript package version in package.json |
| <Nullable>enable</Nullable> | "strict": true or "strictNullChecks": true | strict is broader — enables 7 sub-checks |
| <OutputPath>bin/</OutputPath> | "outDir": "./dist" | Output directory for compiled JS |
| <RootNamespace>MyApp</RootNamespace> | N/A | TypeScript uses file-based modules, not namespaces |
| <Optimize>true</Optimize> | Bundler config (Vite/webpack), not tsconfig | Minification is a bundler concern, not tsc |
| <Compile Include="..." /> | "include": ["src/**/*"] | File inclusion patterns |
| <Compile Remove="..." /> | "exclude": [...] | File exclusion patterns |
| Debug/Release configurations | Multiple tsconfig files + NODE_ENV | No built-in concept; by convention |
| Project References <ProjectReference> | "references": [...] in tsconfig + paths | TypeScript project references for monorepos |
| dotnet build | tsc + bundler (framework-specific) | Often npm run build invokes both |
| dotnet build -v detailed | tsc --listFiles --diagnostics | Verbose compilation output |
| dotnet watch build | tsc --watch or vite dev / nest start --watch | Watch mode |

| Build command | What it does |
| --- | --- |
| next build | Full Next.js production build (type-check + bundle + optimize) |
| next dev | Development server with HMR (hot module replacement) |
| nest build | NestJS tsc compilation to dist/ |
| nest start --watch | NestJS watch mode with auto-restart |
| vite build | Vite production bundle |
| vite dev | Vite development server |
| tsc --noEmit | Type-check only, no output (use in CI) |
| tsc --watch | Watch mode, rebuild on changes |

| tsconfig option | What it controls | Recommendation |
| --- | --- | --- |
| strict | Enables all strictness sub-flags | Always true |
| target | JavaScript syntax in emitted output | ES2022 for Node.js apps; let the framework set it for browser apps |
| module | Import/export syntax in emitted output | CommonJS for NestJS; ESNext or let the framework decide for browser |
| moduleResolution | How import paths are resolved | NodeNext for modern Node.js; Bundler for Vite/Next.js |
| paths | Import path aliases (@/) | Set in tsconfig; also configure in bundler (except Next.js) |
| noEmit | Skip file output, type-check only | Use in CI pipelines |
| experimentalDecorators | Legacy decorator support | Required for NestJS |
| emitDecoratorMetadata | Emit constructor parameter metadata | Required for NestJS DI |
| esModuleInterop | Allow import X from 'cjs-module' syntax | Always true |
| resolveJsonModule | Allow import config from './config.json' | true when needed |

Further Reading

  • TypeScript Compiler Options Reference — The canonical documentation for every tsconfig option. Useful as a reference; do not read linearly.
  • Vite Documentation — Getting Started — Covers the development model, configuration, and production build behavior. If you are working on any Vite-based project, read the “Features” and “Build” sections.
  • Next.js — TypeScript Configuration — Covers how Next.js reads and extends tsconfig, incremental type checking, and next build behavior.
  • esbuild Documentation — If you need to understand why modern JS builds are fast, or if you need to write a custom build script, esbuild’s documentation explains the core concepts well even if you are not using it directly.

1.6 — The JavaScript Library Landscape: A .NET Engineer’s Decoder Ring

For .NET engineers who know: ASP.NET Core — middleware pipelines, controllers, DI, filters, model binding, and the cohesive “one framework does everything” experience. You’ll learn: How the JS/TS ecosystem splits what ASP.NET Core does into separate, composable libraries — and how to map every architectural concept you already know to its TypeScript equivalent. Time: 25-30 minutes


The most disorienting thing about the JavaScript ecosystem is not the syntax. It’s not TypeScript’s type system. It’s the realization that there is no single framework you install and configure. Instead, you’re assembling an architecture from components, and nothing tells you which components belong together.

This article is your decoder ring. By the end, you’ll know exactly how every layer of a TypeScript application maps to what you already know from ASP.NET Core. You’ll understand why the ecosystem is fragmented this way, which libraries dominate each layer, and how to read an unfamiliar JS/TS project and orient yourself within thirty seconds.


The .NET Way (What You Already Know)

When you start a new ASP.NET Core project, you get a framework that handles the entire middle tier from a single dependency. One dotnet new webapi command and you have:

  • Routing — via [Route], [HttpGet], and MapControllers()
  • Middleware — via app.Use(), app.UseAuthentication(), app.UseAuthorization()
  • Dependency injection — via IServiceCollection with AddScoped, AddSingleton, AddTransient
  • Model binding — via [FromBody], [FromQuery], [FromRoute]
  • Validation — via Data Annotations or FluentValidation
  • Auth — via ASP.NET Identity, JWT bearer middleware, and [Authorize]
  • Configuration — via IConfiguration, appsettings.json, and environment-based overrides
  • Logging — via ILogger<T> with pluggable sinks
  • Background services — via IHostedService and BackgroundService

Microsoft designed all of these to work together. They share conventions, they integrate with the same DI container, and they evolve together under a single release cadence. When you add app.UseAuthentication() before app.UseAuthorization(), the framework enforces that ordering. When your controller has a [Authorize] attribute, it integrates with the same auth middleware you configured in Program.cs. The system is opinionated by design.

// Program.cs — everything in one place
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options => { /* config */ });
builder.Services.AddScoped<IUserService, UserService>();
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Run();

Twelve lines of setup, and your middleware pipeline, DI container, ORM, and auth system are wired together. That’s the ASP.NET Core value proposition: cohesion.

The JavaScript ecosystem makes a different trade. Instead of cohesion, it gives you composability. Everything is separate. That fragmentation is not an accident — it reflects the ecosystem’s history and values. Understanding why the ecosystem is structured this way is the first step to navigating it confidently.


The JavaScript/TypeScript Way

The Fundamental Mental Shift: Libraries, Frameworks, and Meta-Frameworks

Before mapping individual tools, you need to understand a three-way distinction that barely exists in the .NET world but is critical in JavaScript:

| Term | What It Means | .NET Analog |
| --- | --- | --- |
| Library | Does one thing. No opinions about your structure. You call it. | NuGet packages like Newtonsoft.Json |
| Framework | Has opinions. Calls your code via its conventions. | ASP.NET Core (the pattern where the framework calls your controllers) |
| Meta-framework | Builds on a library/framework to add routing, SSR, build tooling, and full-stack conventions | No direct analog — closest is ASP.NET Core MVC on top of ASP.NET Core |

React is a library. Next.js is a meta-framework built on React. The distinction matters because React doesn’t do routing, server rendering, or build optimization. Next.js does all of those things by wrapping React with additional conventions and tooling.

This pattern repeats across the ecosystem: Vue plays the same library role (it brands itself a framework, and technically is one), and Nuxt is its meta-framework. That extra layer is where you get the ASP.NET Core-like experience.


Frontend Frameworks

These are the libraries/frameworks responsible for building UIs. If your team is building a web frontend (as opposed to a pure API), you’re choosing one of these.

React

React is a component library built by Meta, released in 2013 and still dominant in 2026. The key architectural decisions that define React:

One-way data flow. Data flows down through props (component parameters), and events flow up through callbacks. There is no two-way binding by default. This is the opposite of how WPF/MAUI bindings work and how Blazor’s two-way binding (@bind) works.
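A minimal sketch of that flow (the component and prop names here are ours): state lives in the parent, travels down as a prop, and the child raises a callback instead of mutating anything.

// Sketch: props flow down, events flow back up through a callback
import { useState } from 'react';   // useState is explained under "Hooks" below

interface RoleBadgeProps {
  role: 'admin' | 'user';
  onPromote: () => void;            // the child never changes role itself; it asks the parent to
}

function RoleBadge({ role, onPromote }: RoleBadgeProps) {
  return (
    <button onClick={onPromote}>
      {role === 'admin' ? 'Administrator' : 'Promote to admin'}
    </button>
  );
}

function UserRow() {
  // the state is owned here, in the parent
  const [role, setRole] = useState<'admin' | 'user'>('user');
  return <RoleBadge role={role} onPromote={() => setRole('admin')} />;
}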

JSX. React components return JSX — a syntax extension where HTML-like markup lives inside TypeScript files. If Razor syntax is .cshtml (C# inside HTML), JSX is the inverse: HTML inside TypeScript.

// React component — TypeScript (.tsx file)
// Think of this as a Razor Component but in TypeScript

interface UserCardProps {
  name: string;
  email: string;
  role: "admin" | "user";
}

// Functional component — the modern React pattern (class components are legacy)
function UserCard({ name, email, role }: UserCardProps) {
  return (
    <div className="user-card">
      <h2>{name}</h2>
      <p>{email}</p>
      {role === "admin" && <span className="badge">Administrator</span>}
    </div>
  );
}
// Rough equivalent in Blazor — notice the inverted syntax relationship
// Blazor: C# code in .razor files | React: HTML in .tsx files

@* UserCard.razor *@
@code {
    [Parameter] public string Name { get; set; }
    [Parameter] public string Email { get; set; }
    [Parameter] public string Role { get; set; }
}

<div class="user-card">
    <h2>@Name</h2>
    <p>@Email</p>
    @if (Role == "admin")
    {
        <span class="badge">Administrator</span>
    }
</div>

Hooks. State and side effects in React components are managed through “hooks” — functions that start with use. useState manages local state (think of a private field whose setter also triggers a re-render, the way StateHasChanged does). useEffect handles side effects like data fetching (roughly OnInitializedAsync and Dispose combined). See Article 3.2 for a full treatment of hooks.
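A first taste, as a sketch: the endpoint URL and component are illustrative assumptions, and Article 3.2 covers the details.

// Sketch: the two hooks you will meet first, with the .NET mental mapping in comments
import { useState, useEffect } from 'react';

function UserCount() {
  // useState: a private field whose setter also triggers a re-render
  const [count, setCount] = useState<number | null>(null);

  // useEffect with []: runs once after the first render (like OnInitializedAsync);
  // the returned cleanup function runs on unmount (like Dispose)
  useEffect(() => {
    let cancelled = false;
    fetch('/api/users/count')                 // illustrative URL
      .then((res) => res.json())
      .then((data) => { if (!cancelled) setCount(data.count); });
    return () => { cancelled = true; };
  }, []);

  return <p>{count === null ? 'Loading…' : `${count} users`}</p>;
}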

React is not a framework. React handles one thing: rendering UI. It has no router, no HTTP client, no form validation, no state management beyond component-local state. Everything else requires additional libraries. This is why you almost never use React alone — you use it through Next.js, which layers those capabilities on top.

Vue 3

Vue 3 is a progressive framework — more opinionated than React, less opinionated than Angular. It’s the closest thing in the JS world to Blazor’s mental model.

Single File Components (SFC). Vue components live in .vue files with <template>, <script>, and <style> sections. The separation mirrors the Blazor .razor component structure more closely than React’s JSX.

Reactivity system. Vue’s reactivity is built in. When you declare const count = ref(0), Vue automatically tracks when count is read and triggers a re-render when it changes. This is closer to WPF’s INotifyPropertyChanged or Blazor’s auto-re-render behavior than React’s explicit state updates.
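A minimal sketch of that built-in reactivity (the counter component is ours, not part of the UserCard example below):

<!-- Counter.vue — sketch: reading count is tracked, mutating it re-renders -->
<script setup lang="ts">
import { ref } from 'vue';

const count = ref(0);            // reads of count.value are tracked automatically

function increment() {
  count.value++;                 // mutating it re-renders anything that read it
}
</script>

<template>
  <button @click="increment">Clicked {{ count }} times</button>
</template>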

Composition API. Vue 3 introduced the Composition API — a way to organize component logic by concern rather than by lifecycle stage (a significant improvement over Vue 2’s Options API). The <script setup> syntax is the modern preferred form:

<!-- UserCard.vue — Vue 3 Single File Component -->
<script setup lang="ts">
// Top-level bindings in <script setup> are exposed to the template; no class or export default needed
import { computed } from 'vue';
const props = defineProps<{
  name: string;
  email: string;
  role: "admin" | "user";
}>();

// computed is like a C# calculated property — auto-updates when deps change
const displayName = computed(() => `${props.name} (${props.role})`);
</script>

<template>
  <div class="user-card">
    <h2>{{ displayName }}</h2>
    <p>{{ props.email }}</p>
    <span v-if="props.role === 'admin'" class="badge">Administrator</span>
  </div>
</template>

<style scoped>
/* scoped CSS — like Blazor's Component.razor.css isolation */
.user-card { padding: 1rem; }
</style>

Vue is our framework of choice for projects that use the Nuxt meta-framework. See Article 3.3 for a full treatment of Vue 3.

Angular

Angular deserves a section even though we don’t use it, because you will encounter it. Angular is the full-framework approach — closest to ASP.NET Core in philosophy, built by Google, and the dominant choice in large enterprise organizations.

Angular is the only frontend framework that ships with:

  • Its own DI container (constructor injection, like ASP.NET Core)
  • Its own HTTP client (HttpClient — even named the same)
  • Its own routing
  • Its own form management (reactive forms, template-driven forms)
  • TypeScript as a first-class, non-optional requirement since day one

The module system (NgModule) maps most directly to ASP.NET Core’s service registration model — you declare providers, imports, and exports per module. The component decorator system (@Component(), @Injectable(), @NgModule()) maps directly to ASP.NET Core’s attribute-driven design.

// Angular component — notice how similar the decorator pattern feels to C# attributes
@Component({
  selector: 'app-user-card',
  template: `
    <div class="user-card">
      <h2>{{ displayName }}</h2>
      <p>{{ user.email }}</p>
    </div>
  `
})
export class UserCardComponent {
  @Input() user: User;  // [Parameter] equivalent

  // Constructor injection — identical mental model to ASP.NET Core
  constructor(private userService: UserService) {}

  get displayName(): string {
    return `${this.user.name} (${this.user.role})`;
  }
}

If Angular feels familiar, that’s intentional — the Angular team drew heavily from the ASP.NET MVC pattern. The trade-off is verbosity and a steep initial learning curve. When reading an Angular codebase, your .NET instincts will serve you better there than anywhere else in the JS ecosystem.

Svelte and SolidJS

These two frameworks take a fundamentally different approach: they compile away the framework at build time.

Svelte compiles components into vanilla JavaScript — there is no virtual DOM and no framework runtime shipped to the browser. The compiled output is smaller and often faster than React or Vue at runtime. The syntax is distinctive and worth recognizing.

SolidJS uses a reactive primitive system (similar to Vue’s reactivity) but, like Svelte, compiles to efficient vanilla JS without a virtual DOM. It claims top-of-chart benchmark performance.

Both are mature enough for production use in 2026. Neither has the ecosystem depth or community size of React or Vue. You’ll encounter them in greenfield projects where performance and bundle size are priorities. For this curriculum, they’re awareness-level.


Meta-Frameworks: Where the Real ASP.NET Comparison Lives

Here’s the insight that orients everything: when .NET engineers ask “what’s the equivalent of ASP.NET Core in JavaScript?”, the answer is not React or Vue — it’s the meta-framework layer. React and Vue are rendering libraries; Next.js and Nuxt are the frameworks that add the server-side layer that makes them comparable to ASP.NET.

Next.js

Next.js is a React meta-framework built by Vercel. It’s the closest thing to ASP.NET MVC + Razor Pages in the React world. What Next.js adds on top of React:

  • File-based routing. Create app/users/[id]/page.tsx and you have a route at /users/:id. No route registration, no [Route] attributes. The file system is the router — similar to Razor Pages conventions where Pages/Users/Detail.cshtml maps to /Users/Detail.

  • Server-side rendering (SSR). Components can render on the server before sending HTML to the browser. This is the model Razor Pages uses — server renders HTML, browser displays it.

  • Server Components. A newer paradigm where components marked as server-only never ship JavaScript to the browser. They fetch data directly (no API call needed) and render to HTML. Think of them as Razor Pages with a component model.

  • API routes. Create app/api/users/route.ts and you have a backend endpoint. These are full HTTP request handlers — equivalent to ASP.NET Minimal API endpoints. For projects where the same Next.js app needs a lightweight API, this eliminates a separate NestJS deployment.

  • Middleware. Next.js has its own middleware layer (also called middleware) that runs at the edge before request routing, similar to ASP.NET Core’s middleware pipeline.

The App Router (the current architecture, introduced in Next.js 13 and stable since Next.js 14) uses a file/folder convention for routing that .NET engineers find navigable:

app/
├── layout.tsx          ← Root layout (like _Layout.cshtml)
├── page.tsx            ← Homepage route "/"
├── users/
│   ├── page.tsx        ← Route "/users"
│   └── [id]/
│       ├── page.tsx    ← Route "/users/:id" (Server Component)
│       └── edit/
│           └── page.tsx ← Route "/users/:id/edit"
└── api/
    └── users/
        ├── route.ts    ← GET/POST "/api/users"
        └── [id]/
            └── route.ts ← GET/PUT/DELETE "/api/users/:id"
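To make those route.ts entries concrete, here is a minimal Route Handler sketch. The exported-function-per-HTTP-verb shape and NextResponse.json() are App Router conventions; getUserById and the exact params shape (which differs in newer Next.js majors) are illustrative assumptions.

// app/api/users/[id]/route.ts — GET /api/users/:id (sketch)
import { NextResponse } from 'next/server';
import { getUserById } from '@/lib/users';    // hypothetical data-access helper

export async function GET(
  _request: Request,
  { params }: { params: { id: string } }      // the second argument carries the dynamic segment
) {
  const user = await getUserById(params.id);
  if (!user) {
    return NextResponse.json({ error: 'User not found' }, { status: 404 });
  }
  return NextResponse.json(user);              // like return Ok(user) in a Minimal API endpoint
}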

Nuxt

Nuxt is to Vue what Next.js is to React. It wraps Vue with the same capabilities — SSR, file-based routing, API routes, middleware — with a Vue-flavored convention system. If anything, Nuxt is slightly more opinionated and convention-driven than Next.js (closer to the “convention over configuration” philosophy of classic ASP.NET MVC).

Nuxt’s auto-import system is distinctive: components placed in components/ and composables placed in composables/ are automatically available everywhere without explicit imports. This is more opinionated than Next.js and either feels magical or like hidden complexity depending on your preference.

For teams using Vue, Nuxt is the natural full-stack choice. See Article 3.5 for depth on Nuxt.

Remix

Remix is an alternative React meta-framework, now part of the React Router project. Its architecture is closer to the traditional HTTP request/response model than Next.js.

Where Next.js gives you Server Components (a React-level concept), Remix keeps the abstraction at the HTTP layer — every route has a loader function for GET requests and an action function for POST/PUT/DELETE requests. This maps more naturally to the MVC action pattern:

// Remix route — maps very closely to an ASP.NET Controller action
export async function loader({ params }: LoaderFunctionArgs) {
  // This is like a controller GET action — runs on the server
  const user = await getUserById(params.id);
  return json(user);
}

export async function action({ request, params }: ActionFunctionArgs) {
  // This is like a controller POST/PUT action
  const formData = await request.formData();
  await updateUser(params.id, formData);
  return redirect(`/users/${params.id}`);
}

// The component just renders — no data fetching here
export default function UserPage() {
  const user = useLoaderData<typeof loader>();
  return <UserForm user={user} />;
}

If you find Next.js’s Server Components model confusing (and many do initially), Remix’s request/response model will feel more intuitive. We don’t use Remix as our primary framework, but knowing its pattern helps read existing codebases.


Backend Frameworks: The ASP.NET Core Equivalents

These are pure server-side Node.js frameworks — they have no UI concerns. If you’re building an API, a background service, or a data layer that only runs on the server, this is the layer you’re in.

NestJS — The Direct ASP.NET Core Equivalent

NestJS is the framework we use for dedicated backend APIs. It is the closest architectural equivalent to ASP.NET Core in the JS ecosystem. Before we dive into the other parts of the ecosystem, understand this mapping:

| ASP.NET Core | NestJS | What It Does |
| --- | --- | --- |
| [ApiController] | @Controller() | Marks a class as an HTTP handler |
| [HttpGet("/users")] | @Get("/users") | Maps a method to an HTTP route |
| [Route("[controller]")] | @Controller("users") | Sets the controller’s base path |
| [Authorize] | @UseGuards(AuthGuard) | Protects an endpoint |
| IServiceCollection.AddScoped<T>() | providers: [UserService] in @Module() | Registers a service with the DI container |
| [Inject] / constructor DI | Constructor injection | Resolves services into constructors |
| IActionFilter | Interceptor | Runs logic before/after handler |
| IAuthorizationFilter | Guard | Runs authorization before handler |
| IModelValidator | Pipe | Transforms/validates input before handler |
| IExceptionFilter | ExceptionFilter | Catches and handles thrown exceptions |

NestJS uses Express or Fastify as its HTTP engine (configurable). Think of Express/Fastify as the Kestrel equivalent — the raw HTTP server that NestJS sits on top of. You almost never interact with Express/Fastify directly in a NestJS project; NestJS abstracts it the same way ASP.NET Core abstracts Kestrel.

The module system is where NestJS diverges most from .NET in structure. NestJS makes DI explicitly module-scoped:

// users.module.ts — NestJS module
// Compare to a well-organized IServiceCollection extension method in .NET

@Module({
  imports: [DatabaseModule],    // ← like builder.Services.AddDbContext()
  controllers: [UsersController],
  providers: [UsersService],    // ← services this module owns
  exports: [UsersService],      // ← services other modules can inject
  // Without exports, UsersService is PRIVATE to this module
  // ASP.NET Core has no equivalent — all registered services are globally available
})
export class UsersModule {}

The exports concept has no direct .NET equivalent — in ASP.NET Core, any registered service is accessible anywhere. NestJS’s explicit exports create stronger module boundaries. This is actually stricter than .NET’s default behavior and worth embracing.

Express.js

Express is the foundational Node.js HTTP framework — minimal by design. No routing conventions, no DI, no model binding. Just middleware functions and route handlers:

// Express — the "raw Kestrel" of Node.js
const app = express();

app.use(express.json());  // ← JSON body-parsing middleware (what [FromBody] model binding does for you)

app.get('/users/:id', async (req, res) => {
  const user = await db.findUser(req.params.id);
  res.json(user);
});

app.listen(3000);

Express gives you nothing by default. No validation, no DI, no auth, no logging — you assemble those yourself from middleware packages. If ASP.NET Core out-of-the-box is a furnished apartment, Express is an empty room.

We don’t use Express standalone. It runs underneath NestJS (NestJS’s default adapter). If you’re reading a codebase that uses Express without NestJS, it’s either a very old project or a microservice where minimal overhead was a priority.

Fastify

Fastify is a performance-focused alternative to Express. The API is similar but with different serialization semantics (JSON serialization is explicitly declared and compiled ahead of time, which is faster than Express’s dynamic serialization). Fastify can be swapped in under NestJS instead of Express for performance-sensitive deployments:

// NestJS with Fastify adapter instead of Express
const app = await NestFactory.create<NestFastifyApplication>(
  AppModule,
  new FastifyAdapter({ logger: true })
);

For most applications, the difference between Express and Fastify under NestJS is not measurable in real-world conditions. Consider Fastify when you have benchmarked a specific bottleneck.

Hono

Hono is an ultra-lightweight framework that runs anywhere: Node.js, Deno, Cloudflare Workers, Bun, AWS Lambda. Its defining feature is portability — the same Hono application can run at the edge, in a serverless function, or in a traditional Node.js process with no code changes.

// Hono — extremely minimal, edge-first
import { Hono } from 'hono'

const app = new Hono()

app.get('/users/:id', async (c) => {
  const id = c.req.param('id')
  return c.json({ id, name: 'Chris' })
})

export default app  // works on any runtime

Hono is relevant for edge functions (Next.js middleware, Cloudflare Workers, API routes with minimal cold start requirements). It’s not a NestJS replacement for a full API — it’s a specialized tool for edge-specific workloads.


The Middle-Tier Architecture in Detail

This is the core comparison. Every row maps an ASP.NET Core concern to its NestJS equivalent, because NestJS is your primary backend framework in this stack.

| Concern | ASP.NET Core | NestJS Equivalent |
| --- | --- | --- |
| Request Pipeline | Middleware + Filters | Middleware + Guards + Interceptors + Pipes |
| Routing | [Route] / [HttpGet] attributes | @Controller() / @Get() decorators |
| DI Container | IServiceCollection / IServiceProvider | @Module() providers / exports |
| DI Lifetime | AddScoped / AddSingleton / AddTransient | DEFAULT (singleton) / REQUEST / TRANSIENT scope |
| Model Binding | [FromBody], [FromQuery], [FromRoute] | @Body(), @Query(), @Param() decorators |
| Validation | Data Annotations / FluentValidation | class-validator + ValidationPipe / Zod + custom pipe |
| Auth | ASP.NET Identity / [Authorize] / JWT Bearer | Passport.js / Guards / Clerk JWT validation |
| Config | IConfiguration / appsettings.json | ConfigModule / .env + @nestjs/config |
| Logging | ILogger<T> / Serilog | Built-in Logger / Pino / Winston |
| Background Jobs | IHostedService / Hangfire | Bull/BullMQ queues / @Cron() |
| Real-time | SignalR | Socket.io / @WebSocketGateway() |
| Response Caching | [ResponseCache] / IMemoryCache | CacheInterceptor / @nestjs/cache-manager |
| Exception Handling | Exception filters / ProblemDetails | ExceptionFilter / HttpException |
| API Documentation | Swashbuckle / NSwag | @nestjs/swagger |
| Health Checks | IHealthCheck / AddHealthChecks() | @nestjs/terminus |

One important difference: NestJS’s request pipeline stages are more explicitly named and more granular than ASP.NET’s filter types. The execution order in NestJS is:

graph TD
    subgraph NestJS["NestJS Pipeline"]
        N1["Incoming Request"]
        N2["Middleware\n← like ASP.NET middleware (app.Use())"]
        N3["Guards\n← like Authorization filters [Authorize]"]
        N4["Interceptors (pre)\n← like Action filters (OnActionExecuting)"]
        N5["Pipes\n← like model binding + validation"]
        N6["Route Handler\n← like your controller action"]
        N7["Interceptors (post)\n← like Action filters (OnActionExecuted)"]
        N8["Exception Filters\n← like exception middleware"]
        N9["Response"]
        N1 --> N2 --> N3 --> N4 --> N5 --> N6 --> N7 --> N8 --> N9
    end

    subgraph ASP["ASP.NET Core Pipeline"]
        A1["Incoming Request"]
        A2["Middleware (authentication, CORS, etc.)"]
        A3["Routing"]
        A4["Authorization (IAuthorizationFilter)"]
        A5["Action Filter (IActionFilter.OnActionExecuting)"]
        A6["Model Binding + Validation"]
        A7["Controller Action"]
        A8["Action Filter (IActionFilter.OnActionExecuted)"]
        A9["Exception Filter (IExceptionFilter)"]
        A10["Result Filter"]
        A11["Response"]
        A1 --> A2 --> A3 --> A4 --> A5 --> A6 --> A7 --> A8 --> A9 --> A10 --> A11
    end

The pipelines are nearly isomorphic. The main structural difference: ASP.NET Core’s filter chain is part of the MVC layer (it doesn’t run for middleware-handled requests), while NestJS’s guards/interceptors/pipes are defined at the controller/handler level and integrate with the underlying HTTP layer more directly.


Side-by-Side: The Same REST Endpoint in ASP.NET Core and NestJS

The most direct way to internalize the mapping is to see the same complete, production-ready endpoint written in both frameworks. Here’s a GET /api/users/:id endpoint with authentication, validation, error handling, and logging:

ASP.NET Core Version

// Controllers/UsersController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace MyApp.Controllers;

[ApiController]                           // ← Enables model binding, auto 400 on validation fail
[Route("api/[controller]")]               // ← Base route: "api/users"
[Authorize]                               // ← Requires authenticated user (JWT checked by middleware)
public class UsersController : ControllerBase
{
    private readonly IUserService _userService;
    private readonly ILogger<UsersController> _logger;

    // Constructor injection — DI container resolves these
    public UsersController(IUserService userService, ILogger<UsersController> logger)
    {
        _userService = userService;
        _logger = logger;
    }

    [HttpGet("{id}")]                     // ← Route: GET api/users/{id}
    [ProducesResponseType(typeof(UserDto), 200)]
    [ProducesResponseType(404)]
    public async Task<IActionResult> GetUser(
        [FromRoute] Guid id)              // ← Model binding: reads from URL path
    {
        _logger.LogInformation("Fetching user {UserId}", id);

        var user = await _userService.GetByIdAsync(id);

        if (user is null)
        {
            return NotFound(new ProblemDetails
            {
                Title = "User not found",
                Status = 404
            });
        }

        return Ok(user);
    }
}
// Services/UserService.cs
public class UserService : IUserService
{
    private readonly AppDbContext _db;

    public UserService(AppDbContext db) => _db = db;

    public async Task<UserDto?> GetByIdAsync(Guid id)
    {
        var user = await _db.Users
            .Where(u => u.Id == id && !u.IsDeleted)
            .Select(u => new UserDto(u.Id, u.Name, u.Email, u.Role))
            .FirstOrDefaultAsync();

        return user;
    }
}
// Program.cs — registration
builder.Services.AddScoped<IUserService, UserService>();
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer();

NestJS Version

// users/users.controller.ts
import {
  Controller,
  Get,
  Param,
  ParseUUIDPipe,        // ← Equivalent of [FromRoute] + UUID validation
  NotFoundException,
  UseGuards,
  Logger,
} from '@nestjs/common';
import { ApiBearerAuth, ApiTags } from '@nestjs/swagger';  // ← Swashbuckle equivalent
import { JwtAuthGuard } from '../auth/jwt-auth.guard';     // ← [Authorize] equivalent
import { UsersService } from './users.service';

@ApiTags('users')                         // ← Swagger grouping (like [ApiExplorerSettings])
@ApiBearerAuth()                          // ← Documents the JWT requirement in Swagger
@Controller('users')                      // ← [Route("users")] equivalent
@UseGuards(JwtAuthGuard)                  // ← [Authorize] equivalent — applies to all methods
export class UsersController {
  // NestJS Logger can be injected like ILogger<T>, but is commonly instantiated as an instance property
  private readonly logger = new Logger(UsersController.name);

  // Constructor injection — identical mental model to ASP.NET Core
  constructor(private readonly usersService: UsersService) {}

  @Get(':id')                             // ← [HttpGet("{id}")] equivalent
  async findOne(
    @Param('id', ParseUUIDPipe) id: string  // ← [FromRoute] + UUID validation in one decorator
  ) {
    this.logger.log(`Fetching user ${id}`);

    const user = await this.usersService.findById(id);

    if (!user) {
      // NestJS HttpException hierarchy — like ProblemDetails in ASP.NET
      throw new NotFoundException(`User ${id} not found`);
    }

    return user;  // NestJS auto-serializes to JSON (like return Ok(user))
  }
}
// users/users.service.ts
import { Injectable } from '@nestjs/common';
import { PrismaService } from '../prisma/prisma.service';

@Injectable()                             // ← marks the class as a DI provider (the IServiceCollection registration signal)
export class UsersService {
  constructor(private readonly prisma: PrismaService) {}

  async findById(id: string) {
    // Prisma query — equivalent of LINQ FirstOrDefaultAsync
    return this.prisma.user.findFirst({
      where: {
        id,
        isDeleted: false,
      },
      select: {
        id: true,
        name: true,
        email: true,
        role: true,
      },
    });
    // Returns null if not found — like EF's FirstOrDefaultAsync returning null
  }
}
// users/users.module.ts — the registration equivalent of Program.cs
import { Module } from '@nestjs/common';
import { UsersController } from './users.controller';
import { UsersService } from './users.service';
import { PrismaModule } from '../prisma/prisma.module';

@Module({
  imports: [PrismaModule],                // ← Like builder.Services.AddDbContext()
  controllers: [UsersController],         // ← Controller registration
  providers: [UsersService],             // ← Like builder.Services.AddScoped<UsersService>()
  exports: [UsersService],               // ← Makes UsersService available to other modules
})
export class UsersModule {}
// app.module.ts — the root module, like the top of Program.cs
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { UsersModule } from './users/users.module';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }),  // ← IConfiguration global registration
    UsersModule,
  ],
})
export class AppModule {}
// main.ts — application bootstrap, equivalent to Program.cs entry point
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { ValidationPipe } from '@nestjs/common';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  app.setGlobalPrefix('api');              // ← Like MapControllers() with route prefix

  app.useGlobalPipes(                      // ← Global validation — like AddFluentValidation()
    new ValidationPipe({
      whitelist: true,                     // ← Strip unknown properties (like [BindNever])
      transform: true,                     // ← Auto-coerce types (like model binding)
    })
  );

  await app.listen(3000);
}

bootstrap();

What to notice:

  1. The decorator pattern (@Controller(), @Get(), @UseGuards()) is syntactically identical to C# attributes, but decorators are functions that execute at class definition time — not metadata read at runtime. Article 2.5 covers this distinction in depth.

  2. NestJS’s NotFoundException maps to ASP.NET’s return NotFound() / ProblemDetails. NestJS has a full HttpException hierarchy: BadRequestException, UnauthorizedException, ForbiddenException, NotFoundException, ConflictException, and so on.

  3. The module (UsersModule) is where .NET’s IServiceCollection registration happens in NestJS. There is no global Program.cs listing all services — each module registers its own providers and explicitly exports what other modules can use.

  4. ParseUUIDPipe combines two ASP.NET concepts: model binding ([FromRoute]) and validation ([RegularExpression] or a custom type converter). Pipes in NestJS transform and validate simultaneously.


Key Differences

| Dimension | ASP.NET Core | NestJS / JS Ecosystem |
| --- | --- | --- |
| Cohesion | Single framework, everything integrated | Composed from separate packages |
| Service scope default | Scoped (per-request) is explicit | Singleton is the NestJS default — be explicit about scope |
| Module visibility | All registered services are globally available | Services are module-private unless explicitly exported |
| Decorator vs. Attribute | Attributes are metadata (passive, read by reflection) | Decorators are functions (active, execute at definition) |
| Validation trigger | Model binding runs validation automatically with [ApiController] | Must opt into ValidationPipe globally or per-route |
| Error response format | ProblemDetails (RFC 7807) built in | HttpException with message string — must configure for ProblemDetails format |
| Framework maturity | 10+ years, stable API, strong migration tooling | NestJS is younger (first released in 2017), still evolving (some breaking changes between majors) |
| Configuration model | Layered providers, environment hierarchy | .env files, explicit ConfigModule, process.env |
| Frontend coupling | MVC views, Razor Pages, or completely separate | Natural continuum from API-only to Next.js full-stack |
| Learning curve | Steeper initially, very consistent thereafter | Easier entry point, more ecosystem navigation required |

Gotchas for .NET Engineers

1. NestJS Services Are Singleton by Default — This Will Bite You

In ASP.NET Core, a service registered with AddScoped<T>() gets a new instance per HTTP request, so state on a scoped service is safe: it is isolated per request.

In NestJS, @Injectable() services are singletons by default (the DEFAULT scope). This means a property you set on a service instance during one request will be visible to other concurrent requests. This is equivalent to accidentally calling AddSingleton<T>() for everything in ASP.NET Core.

// THIS IS A BUG — service is singleton, currentUser is shared across requests
@Injectable()
export class UsersService {
  private currentUser: User;  // ← DANGER: shared across all requests

  async processRequest(userId: string) {
    this.currentUser = await this.findById(userId);  // ← race condition
    return this.doSomething();
  }
}

// CORRECT — use request-scoped injection when you need per-request state
@Injectable({ scope: Scope.REQUEST })
export class UsersService {
  private currentUser: User;  // ← now safe — new instance per request
  // ...
}

For stateless services (the common case — query a database, return a result), singleton scope is fine and more efficient. The gotcha is when you store state on a service property. In ASP.NET Core, you’d get a warning-level code smell; in NestJS, it’s a silent race condition.

2. There Is No Global Service Registration — Module Boundaries Are Enforced

If you add a service in AuthModule and try to inject it in UsersController, you’ll get a NestJS injection error at startup: Nest can't resolve dependencies of the UsersController. The service is not visible outside its module unless explicitly exported.

// auth.module.ts — JwtService is NOT available to other modules by default
@Module({
  providers: [JwtService],
  // Missing: exports: [JwtService]
})
export class AuthModule {}

// users.module.ts — this will cause a runtime error at startup
@Module({
  imports: [AuthModule],          // ← importing AuthModule isn't enough
  controllers: [UsersController],
})
export class UsersModule {}

// UsersController will fail to start because JwtService isn't exported:
// ERROR: Nest can't resolve dependencies of the UsersController (?).
// Fix: explicitly export what other modules need
@Module({
  providers: [JwtService],
  exports: [JwtService],          // ← now JwtService is available to any module that imports AuthModule
})
export class AuthModule {}

The ASP.NET Core reflex is to just register a service in Program.cs and have it available everywhere. That reflex will produce confusing startup errors in NestJS until you internalize the explicit export model.

3. Validation Does Not Run Unless You Wire It Up

In ASP.NET Core with [ApiController], model binding and Data Annotation validation are automatic. You get a 400 response with validation details without writing a single line of validation code in your action.

In NestJS, validation requires:

  1. A global ValidationPipe configured in main.ts (or per-controller/per-route)
  2. class-validator decorators on your DTO class
  3. The class-transformer package for type coercion

Without all three, a DTO with @IsEmail() annotations will silently accept any string:

// dto/create-user.dto.ts
import { IsString, IsEmail, MinLength } from 'class-validator';

export class CreateUserDto {
  @IsString()
  @MinLength(2)
  name: string;

  @IsEmail()
  email: string;
}

// ← Decorators alone do NOTHING without ValidationPipe
// main.ts — without this, the DTO decorators above are ignored
app.useGlobalPipes(
  new ValidationPipe({
    whitelist: true,     // strip unknown properties
    transform: true,     // auto-coerce e.g. string "123" → number 123
  })
);

If you’re reaching for Zod instead of class-validator (which is valid — see Article 2.3), you need a custom pipe that calls schema.parse(value) and maps Zod errors to NestJS’s BadRequestException.
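A minimal sketch of such a pipe, using safeParse rather than parse to avoid a try/catch (the class name, schema names, and usage line are ours):

// zod-validation.pipe.ts — sketch of a Zod-backed replacement for ValidationPipe
import { BadRequestException, Injectable, PipeTransform } from '@nestjs/common';
import { ZodSchema } from 'zod';

@Injectable()
export class ZodValidationPipe implements PipeTransform {
  constructor(private readonly schema: ZodSchema) {}

  transform(value: unknown) {
    const result = this.schema.safeParse(value);
    if (!result.success) {
      // map Zod issues into the 400 response shape NestJS clients expect
      throw new BadRequestException(result.error.issues);
    }
    return result.data;   // the handler receives the validated, typed payload
  }
}

// Usage sketch on a handler:
// @Post() create(@Body(new ZodValidationPipe(createUserSchema)) dto: CreateUser) { ... }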

4. Frontend Framework Choices Are Not Interchangeable at the Meta-Framework Level

It’s tempting to think React and Vue are drop-in alternatives. For simple component rendering, they are roughly comparable. But at the meta-framework level, Next.js and Nuxt have diverged significantly in their architecture and have different ecosystem assumptions:

  • Next.js Server Components are a React-specific concept. There is no direct Nuxt equivalent (Nuxt uses a different server rendering model).
  • Nuxt’s auto-imports and Pinia state management are Vue-specific ecosystem choices that have no React analog.
  • Libraries that work with React (Radix, shadcn/ui, React Hook Form) don’t work with Vue, and vice versa.

When you read a job description or a project description and see “React” or “Vue,” treat it as a fundamental architectural axis — similar to how “WPF” and “ASP.NET” in .NET imply different component ecosystems despite both being .NET.

5. Express Middleware Is Not the Same as NestJS Middleware

If you encounter an Express middleware package (there are thousands) and want to use it in NestJS, the integration is possible but not automatic. Express middleware is a function signature (req, res, next) => void. NestJS middleware has the same signature and can wrap Express middleware, but NestJS Guards, Interceptors, and Pipes are NestJS-specific and do not accept raw Express middleware.

// This works — using an Express middleware package inside NestJS middleware
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response } from 'express';
import cors from 'cors';

@Injectable()
export class CorsMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: () => void) {
    cors()(req as any, res as any, next);  // wrapping Express middleware
  }
}

// This does NOT work — you cannot use an Express middleware as a NestJS Guard
// Guards have a completely different signature and purpose
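For contrast, here is roughly what a Guard looks like. It receives an ExecutionContext rather than (req, res, next); the API-key check itself is purely illustrative:

// api-key.guard.ts — sketch of a NestJS Guard (the [Authorize]-style extension point)
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';

@Injectable()
export class ApiKeyGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    // Guards work against an ExecutionContext, not the raw Express signature
    const request = context.switchToHttp().getRequest();
    return request.headers['x-api-key'] === process.env.API_KEY;  // illustrative check only
  }
}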

NestJS already integrates CORS, rate limiting, compression, and other common concerns directly — reach for @nestjs/* packages first before wrapping raw Express middleware.

6. “Framework” Means Different Things in Frontend vs. Backend Contexts

React is commonly called a “framework” by practitioners even though it’s technically a library (the React team calls it a library). Angular is unambiguously a framework. This inconsistency in terminology causes real confusion when reading job posts, articles, and documentation.

A working rule: if you hear “framework” in a frontend context, ask whether they mean the rendering library (React/Vue) or the meta-framework (Next.js/Nuxt/Remix); Angular fills both roles by itself. In a backend context, “framework” almost always means Express, Fastify, or NestJS.


Hands-On Exercise

The goal of this exercise is to build a minimal NestJS module and viscerally experience how the pieces connect. This is not a toy example — it mirrors the structure you’ll use on real projects.

What you’ll build: A ProductsModule in NestJS with a controller and service that returns a list of products. No database — use a hardcoded in-memory list for now. The point is the wiring, not the data.

Prerequisites:

  • Node.js 20+ installed
  • pnpm installed (npm install -g pnpm)

Step 1: Create a new NestJS project

npx @nestjs/cli new products-api
cd products-api
pnpm install  # or npm install

Step 2: Generate the module, controller, and service using the NestJS CLI

# The NestJS CLI generates the same scaffolding that dotnet new generates
npx nest generate module products
npx nest generate controller products
npx nest generate service products

This generates src/products/products.module.ts, products.controller.ts, and products.service.ts, and automatically updates app.module.ts to import ProductsModule. This is the dotnet new + project reference equivalent.

Step 3: Implement the service

Open src/products/products.service.ts and replace its content:

import { Injectable } from '@nestjs/common';

export interface Product {
  id: string;
  name: string;
  price: number;
  inStock: boolean;
}

@Injectable()
export class ProductsService {
  private readonly products: Product[] = [
    { id: '1', name: 'Widget Pro', price: 29.99, inStock: true },
    { id: '2', name: 'Gadget Plus', price: 49.99, inStock: false },
    { id: '3', name: 'Doohickey Max', price: 14.99, inStock: true },
  ];

  findAll(): Product[] {
    return this.products;
  }

  findOne(id: string): Product | undefined {
    return this.products.find((p) => p.id === id);
  }
}

Step 4: Implement the controller

Open src/products/products.controller.ts:

import { Controller, Get, Param, NotFoundException } from '@nestjs/common';
import { ProductsService } from './products.service';

@Controller('products')
export class ProductsController {
  constructor(private readonly productsService: ProductsService) {}

  @Get()
  findAll() {
    return this.productsService.findAll();
  }

  @Get(':id')
  findOne(@Param('id') id: string) {
    const product = this.productsService.findOne(id);
    if (!product) {
      throw new NotFoundException(`Product ${id} not found`);
    }
    return product;
  }
}

Step 5: Run and test

pnpm run start:dev
# NestJS starts with hot-reload (like dotnet watch)

In another terminal:

curl http://localhost:3000/products
# Returns the array of all products

curl http://localhost:3000/products/1
# Returns Widget Pro

curl http://localhost:3000/products/99
# Returns 404 with { "message": "Product 99 not found", "error": "Not Found", "statusCode": 404 }

Step 6: Reflect on what you just built

Look at src/app.module.ts — NestJS automatically added ProductsModule to imports when you ran the generate commands. This is the equivalent of builder.Services.AddScoped<IProductsService, ProductsService>() + controller registration in Program.cs, but split across a dedicated module file.

Now trace the DI chain: ProductsController declares private readonly productsService: ProductsService in its constructor. NestJS sees that ProductsService is in ProductsModule.providers and the ProductsController is in ProductsModule.controllers, so it resolves the dependency automatically. No [FromServices], no manual app.Services.GetService<T>() — the module system handles it.

Stretch challenge: Add a POST /products endpoint that accepts a body with name, price, and inStock. Add class-validator and configure a global ValidationPipe in main.ts. Verify that sending an invalid body (missing required fields, wrong types) returns a 400 with field-level errors. This is the equivalent of adding [ApiController] + Data Annotations to an ASP.NET Core controller.


Quick Reference

Frontend Framework Chooser

| Situation | Choose |
| --- | --- |
| New React project (our default) | React + Next.js |
| New Vue project | Vue 3 + Nuxt |
| Reading/maintaining an existing codebase | Whatever framework it uses — learn enough to contribute |
| You need edge/serverless rendering | Next.js (React) or Nuxt (Vue) — both support it |
| You encounter Angular at a client | Treat it like ASP.NET Core — the patterns are familiar |

Backend Framework Chooser

| Situation | Choose |
| --- | --- |
| New API that needs structure, DI, validation | NestJS |
| Minimal API inside a Next.js project | Next.js API routes (route.ts) |
| Ultra-lightweight edge function | Hono |
| Existing .NET backend you want to keep | Keep it — see Track 4B |

Concept Mapping Cheat Sheet

| .NET Concept | JS/TS Equivalent |
| --- | --- |
| [ApiController] + ControllerBase | @Controller() |
| [HttpGet("{id}")] | @Get(':id') |
| [Authorize] | @UseGuards(JwtAuthGuard) |
| IServiceCollection.AddScoped<T>() | providers: [T] in @Module() + @Injectable({ scope: Scope.REQUEST }) |
| IServiceCollection.AddSingleton<T>() | providers: [T] in @Module() (DEFAULT scope is already singleton) |
| builder.Services.AddAuthentication() | PassportModule.register() / Clerk config |
| [FromBody] | @Body() |
| [FromQuery("page")] | @Query('page') |
| [FromRoute] | @Param('id') |
| ILogger<T> | new Logger(ClassName.name) |
| IConfiguration | ConfigService from @nestjs/config |
| IActionFilter | Interceptor |
| IAuthorizationFilter | Guard |
| IExceptionFilter | ExceptionFilter |
| IModelBinder / model binding | Pipe |
| ProblemDetails / NotFound() | NotFoundException / HttpException |
| Swashbuckle | @nestjs/swagger |
| IHostedService | @nestjs/schedule + @Cron() |
| Hangfire | Bull/BullMQ |
| SignalR | @WebSocketGateway() + Socket.io |

Library vs. Framework vs. Meta-Framework

graph TD
    subgraph Frontend["Frontend Stack"]
        MF1["Meta-Framework\n(Next.js, Nuxt, Remix)"]
        FL["Framework/Library\n(React, Vue)"]
        RT1["Runtime\n(Node.js + V8)"]
        MF1 --> FL --> RT1
    end

    subgraph Backend["Backend Stack"]
        MF2["Meta-Framework\n(NestJS)"]
        HE["HTTP Engine\n(Express or Fastify)"]
        RT2["Runtime\n(Node.js + V8)"]
        MF2 --> HE --> RT2
    end

Further Reading

  • NestJS Official Documentation — The authoritative source. The “Overview” section maps well to ASP.NET engineers. Start with Controllers, Providers, Modules.
  • Next.js App Router Documentation — Covers the App Router architecture in depth, including Server Components and the routing conventions.
  • State of JS 2024 Survey — Annual survey of the JS ecosystem. Useful for understanding which libraries are gaining or losing adoption, so you can calibrate which things are worth learning.
  • React Documentation — Thinking in React — The canonical explanation of React’s mental model. Read this after the React fundamentals article (3.1) if the one-way data flow model feels unnatural.

Cross-reference: This article establishes what each library and framework IS in the JS ecosystem. When you’re deciding whether to build your backend in NestJS at all — or whether to keep your existing .NET API and use Next.js as a typed frontend on top of it — see Track 4B, specifically Article 4B.1 (.NET as API) and Article 4B.3 (the decision framework). That track is for teams that have significant .NET investment and want to evaluate when NestJS is the right choice versus keeping C# doing what it does best.

Async/Await: Same Keywords, Different Universe

For .NET engineers who know: C# async/await, Task<T>, Task.WhenAll(), ConfigureAwait(false), and the thread pool model. You’ll learn: Why TypeScript’s async/await looks identical to C#’s but operates on a completely different runtime model — and the specific traps that catch .NET engineers off guard. Time: 15-20 minutes

The .NET Way (What You Already Know)

In C#, async/await is a compiler transformation over the thread pool. When you await a Task, you yield the current thread back to the thread pool. The runtime captures the current SynchronizationContext and, when the awaited work completes, resumes your code on an appropriate thread — either the original context thread (in ASP.NET Framework or UI apps) or any thread pool thread (in ASP.NET Core, which has no SynchronizationContext).

This is why ConfigureAwait(false) exists: in library code, you explicitly opt out of capturing the synchronization context to avoid deadlocks and improve throughput. In ASP.NET Core you mostly don’t need it, but you’ve probably written it a thousand times in library code, or been told to.

// C# — Await yields the current thread; another thread resumes the continuation
public async Task<Order> GetOrderAsync(int id)
{
    // Thread pool thread T1 starts here
    var order = await _db.Orders.FindAsync(id);
    // T1 is released during the DB call
    // Thread pool thread T2 (possibly the same as T1) resumes here

    var tax = await _taxService.CalculateAsync(order);
    // T2 is released; another thread picks up the continuation
    return order with { Tax = tax };
}

The Task<T> type is the fundamental unit of async work. Task.WhenAll() runs tasks in parallel. Task.WhenAny() returns when the first task completes. AggregateException wraps multiple failures from parallel operations.

You also know that forgetting await compiles fine and produces a Task<T> instead of T — a silent bug you’ve probably chased.

The TypeScript Way

There Are No Threads

Node.js is single-threaded. There is no thread pool, no synchronization context, and no thread switching. When you await a Promise in TypeScript, you yield to the event loop — a single-threaded scheduler that processes callbacks, I/O events, and timers in phases.

// TypeScript — Await yields to the event loop; the same thread resumes the continuation
async function getOrder(id: number): Promise<Order> {
  // The single thread starts here
  const order = await db.orders.findUnique({ where: { id } });
  // The thread is released to the event loop during the I/O wait
  // The same thread resumes here when the DB responds

  const tax = await taxService.calculate(order);
  // Same thread again
  return { ...order, tax };
}

The execution model looks the same from inside the async function, but the runtime mechanics are completely different. In C#, multiple threads may touch your async chain. In TypeScript, one thread always does — it just handles other things between your awaits.

This has real consequences:

  • No data races from thread preemption on shared mutable state (within a single Node.js process). Two continuations cannot run on different threads simultaneously because there is only one thread, though two requests can still interleave at await boundaries, so check-then-act logic that spans an await can still race logically (the singleton-service gotcha from Article 1.6 is exactly this).
  • CPU-bound work blocks everything. A tight computation loop blocks the event loop entirely. There is no Task.Run(() => HeavyCpuWork()) equivalent that offloads to a thread pool — you need Worker Threads for that, which is a different topic.
  • No ConfigureAwait(false). There is no synchronization context to capture. You will never write it; you do not need to think about it.

Promises vs. Tasks

Promise<T> is the TypeScript equivalent of Task<T>. The mapping is close but not exact.

// TypeScript
const promise: Promise<string> = fetch("https://api.example.com/data")
  .then((res) => res.json())
  .then((data) => data.name);
// C# equivalent (ContinueWith chaining; Unwrap flattens the nested Task<Task<T>>)
Task<string> task = httpClient
    .GetAsync("https://api.example.com/data")
    .ContinueWith(t => t.Result.Content.ReadFromJsonAsync<Response>())
    .Unwrap()
    .ContinueWith(t => t.Result!.Name);

With async/await, both collapse into the same readable form:

// TypeScript
async function getName(): Promise<string> {
  const res = await fetch("https://api.example.com/data");
  const data = await res.json();
  return data.name;
}
// C#
async Task<string> GetNameAsync()
{
    var res = await _httpClient.GetAsync("https://api.example.com/data");
    var data = await res.Content.ReadFromJsonAsync<Response>();
    return data.Name;
}

One critical difference: Promise is eager. The moment you create a Promise, the work starts. Task can be “hot” (already running) or “cold” depending on how it was created, but most APIs return hot tasks. For practical purposes, treat both as: work starts when the object is created.
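A minimal sketch of that eagerness (it assumes an ES module or async function so top-level await is legal; the delay helper is ours):

// Sketch: Promise work starts at creation time, not at the await
function delay(ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(`done after ${ms}ms`), ms));
}

const pending = delay(1000);   // the timer starts now; creating the Promise starts the work
console.log('created');        // runs immediately, while the timer counts down

const result = await pending;  // awaiting later just unwraps the (possibly already settled) result
console.log(result);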

Promise.all() vs. Task.WhenAll()

Running operations in parallel is one of the most common async patterns, and this is where .NET engineers make their first performance mistake in TypeScript.

// C# — Parallel execution with Task.WhenAll
async Task<(User user, IEnumerable<Order> orders)> GetUserWithOrdersAsync(int userId)
{
    var userTask = _userService.GetByIdAsync(userId);
    var ordersTask = _orderService.GetByUserAsync(userId);

    await Task.WhenAll(userTask, ordersTask);

    return (userTask.Result, ordersTask.Result);
}
// TypeScript — Parallel execution with Promise.all
async function getUserWithOrders(
  userId: number
): Promise<{ user: User; orders: Order[] }> {
  const [user, orders] = await Promise.all([
    userService.getById(userId),
    orderService.getByUser(userId),
  ]);

  return { user, orders };
}

Promise.all() takes an array of Promises and returns a single Promise that resolves when all input Promises resolve. If any one rejects, the returned Promise rejects immediately with that error — the others are abandoned (they still run to completion, but their results are discarded).

Task.WhenAll() behaves differently on failure: it always waits for every task to finish, and although awaiting it re-throws only the first exception, the returned Task aggregates every failure into an AggregateException (exposed via its Exception property). Promise.all() instead short-circuits on the first rejection. If you need all results, successes and failures alike, use Promise.allSettled().

// TypeScript — Collect all results, including failures
async function getUserAndOrders(userId: number) {
  const results = await Promise.allSettled([
    userService.getById(userId),
    orderService.getByUser(userId),
  ]);

  // results[0].status === 'fulfilled' | 'rejected'
  for (const result of results) {
    if (result.status === "rejected") {
      console.error("One operation failed:", result.reason);
    } else {
      // result.value is the resolved value
    }
  }
}
// C# — Task.WhenAll() always waits for all tasks; the returned Task aggregates every failure
async Task GetAllAsync()
{
    var whenAll = Task.WhenAll(task1, task2, task3);
    try
    {
        await whenAll;  // await re-throws only the FIRST exception
    }
    catch
    {
        // whenAll.Exception is an AggregateException containing ALL failures
        foreach (var inner in whenAll.Exception!.InnerExceptions)
            Console.WriteLine(inner.Message);
    }
}
| C# | TypeScript | Behavior on failure |
| --- | --- | --- |
| Task.WhenAll() | Promise.all() | Waits for all (C#) / short-circuits on first (TS) |
| Task.WhenAll() + inspect each task | Promise.allSettled() | All results collected, no short-circuit |
| Task.WhenAny() | Promise.race() | First to complete wins |
| — | Promise.any() | First to succeed wins (ignores rejections) |

Promise.race() vs. Task.WhenAny()

Promise.race() resolves or rejects as soon as the first Promise in the array settles — for better or worse.

// C# — Return whichever completes first
async Task<string> GetFastestResultAsync()
{
    var task1 = FetchFromPrimaryAsync();
    var task2 = FetchFromSecondaryAsync();
    var firstTask = await Task.WhenAny(task1, task2);
    return await firstTask; // unwrap the result
}
// TypeScript — Same pattern, cleaner syntax
async function getFastestResult(): Promise<string> {
  return await Promise.race([fetchFromPrimary(), fetchFromSecondary()]);
}

A common use case in both ecosystems is implementing a timeout:

// TypeScript — Timeout pattern
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// Usage
const result = await withTimeout(slowApiCall(), 5000);
// C# equivalent — CancellationToken is the idiomatic approach,
// but you can also race against Task.Delay
async Task<string> WithTimeoutAsync(Task<string> task, int ms)
{
    var winner = await Task.WhenAny(task, Task.Delay(ms));
    if (winner != task)
        throw new TimeoutException($"Timed out after {ms}ms");
    return await task;  // unwrap the result (or rethrow its exception)
}

Error Handling

In C#, exceptions propagate through await naturally. A faulted Task throws when you await it. AggregateException appears when you call .Result or .Wait() directly, or when Task.WhenAll() collects multiple failures.

In TypeScript, a rejected Promise throws when you await it. The try/catch blocks look identical:

// TypeScript
async function processOrder(id: number): Promise<void> {
  try {
    const order = await orderService.get(id);
    await paymentService.charge(order);
    await notificationService.send(order);
  } catch (err) {
    // err is typed as 'unknown' in strict TypeScript
    if (err instanceof PaymentError) {
      await orderService.markFailed(id);
    }
    throw err; // re-throw if not handled
  }
}
// C#
async Task ProcessOrderAsync(int id)
{
    try
    {
        var order = await _orderService.GetAsync(id);
        await _paymentService.ChargeAsync(order);
        await _notificationService.SendAsync(order);
    }
    catch (PaymentException ex)
    {
        await _orderService.MarkFailedAsync(id);
        throw;
    }
}

The structural difference: in TypeScript catch, the error is typed as unknown (with strict mode and useUnknownInCatchVariables). You must narrow the type before using it. This is the correct behavior — JavaScript throw can throw anything, not just Error objects.

// TypeScript — Proper error narrowing
try {
  await riskyOperation();
} catch (err) {
  // err: unknown
  if (err instanceof Error) {
    console.error(err.message);   // now safe
    console.error(err.stack);     // now safe
  } else {
    console.error("Unknown error:", String(err));
  }
}

JavaScript does have an AggregateError class, but Promise.all() does not use it. When Promise.all() rejects, you get the first rejection's error directly — it is not wrapped. AggregateError shows up only when Promise.any() rejects because every input rejected. If you need to inspect all errors from parallel operations, use Promise.allSettled().
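
For completeness, here is a minimal sketch of where AggregateError does appear (the function name and mirror URLs are illustrative):

// TypeScript — Promise.any() rejects with AggregateError only when EVERY input rejects
async function fetchFromAnyMirror(urls: string[]): Promise<Response> {
  try {
    return await Promise.any(urls.map((url) => fetch(url)));
  } catch (err) {
    if (err instanceof AggregateError) {
      console.error("All mirrors failed:", err.errors); // every individual rejection reason
    }
    throw err;
  }
}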

The Callback Era (Context You Need)

async/await was added to JavaScript in 2017 (ES2017). Before that, asynchronous code used callbacks:

// The callback hell you'll see in old code
fs.readFile("data.json", "utf8", function (err, data) {
  if (err) {
    console.error(err);
    return;
  }
  JSON.parse(data, function (parseErr, parsed) {
    // This isn't even real API — just illustrating the nesting
    database.save(parsed, function (saveErr, result) {
      if (saveErr) {
        console.error(saveErr);
        return;
      }
      console.log("Saved:", result);
    });
  });
});

Promises arrived first as a library pattern, then as a language primitive. async/await is syntax sugar over Promises — await unwraps a Promise, and an async function always returns a Promise.
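
To make the "syntax sugar" claim concrete, here is the same lookup written with explicit .then() chaining and with await; loadUser is a hypothetical service call:

// TypeScript — two spellings of the same operation
declare function loadUser(id: number): Promise<{ name: string }>; // hypothetical

function getUserNameThen(id: number): Promise<string> {
  return loadUser(id).then((user) => user.name);
}

async function getUserNameAwait(id: number): Promise<string> {
  const user = await loadUser(id); // await unwraps the Promise
  return user.name;                // the async wrapper re-wraps it in Promise<string>
}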

You’ll encounter callback-style APIs in Node.js core (many fs, http, crypto functions still have callback variants). The standard way to promisify them is util.promisify():

import { promisify } from "util";
import { readFile } from "fs";

const readFileAsync = promisify(readFile);

async function readConfig(): Promise<string> {
  const contents = await readFileAsync("config.json", "utf8"); // a string, since an encoding was given
  return contents;
}

Modern Node.js provides fs/promises (and similar modules for other core APIs) that are natively Promise-based, so you rarely need promisify in new code. But when you’re working with third-party libraries from the callback era, you’ll need it.
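
For comparison, a sketch of the same read using the Promise-based fs/promises module, which needs no promisify:

// TypeScript — modern Node.js, no promisify needed
import { readFile } from "fs/promises";

async function readConfig(): Promise<string> {
  return readFile("config.json", "utf8"); // resolves to a string because an encoding is given
}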

The Async IIFE Pattern

In C#, await is only valid inside an async method — you can’t use it at class scope or in a synchronous method. C# 7.1 added async Task Main(), and C# 9’s top-level statements let you await directly in Program.cs.

In older JavaScript and in some module contexts, you’ll see the async IIFE (immediately invoked function expression) pattern, which creates and immediately calls an async function to get a scope where await is valid:

// Async IIFE — creates an async scope and executes immediately
(async () => {
  const data = await fetchSomeData();
  console.log(data);
})();

// The outer () invokes the async function immediately
// Any rejection here is an unhandled rejection — you must catch it
(async () => {
  try {
    const data = await fetchSomeData();
    console.log(data);
  } catch (err) {
    console.error("Top-level failure:", err);
    process.exit(1);
  }
})();

Modern TypeScript and Node.js support top-level await in ES modules (files with "type": "module" in package.json, or .mts files). This eliminates the need for the IIFE pattern in most cases:

// Top-level await — works in ES modules
const config = await loadConfig();
const db = await connectToDatabase(config.databaseUrl);

console.log("Server ready");

You’ll still encounter async IIFEs in non-module contexts, in test setup code, and in older codebases. Recognize the pattern; you rarely need to write it yourself.

Key Differences

Concept | C# | TypeScript
Runtime model | Thread pool, multiple threads | Single-threaded event loop
Async primitive | Task<T> / ValueTask<T> | Promise<T>
ConfigureAwait(false) | Required in library code | Does not exist; not needed
Parallel execution | Task.WhenAll() | Promise.all()
First-to-complete | Task.WhenAny() | Promise.race()
All results including failures | Inspect each Task after WhenAll | Promise.allSettled()
First success wins | No direct equivalent | Promise.any()
Error type from parallel ops | AggregateException (all errors) | First rejection (single error)
Error type in catch | Typed as declared exception | unknown (must narrow)
Top-level await | C# 9+ top-level statements | ES modules only
CPU-bound parallelism | Task.Run() + thread pool | Worker Threads (separate module)
Cancellation | CancellationToken | AbortController / AbortSignal
Unhandled async errors | UnobservedTaskException event; no crash by default | Crashes the process by default (Node 15+)
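
The cancellation row deserves a short sketch. AbortSignal is accepted by fetch() and by many newer Node.js APIs; the URL below is a placeholder:

// TypeScript — AbortController as the rough analog of CancellationTokenSource
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 5000); // cancel after 5s

try {
  const response = await fetch("https://example.com/slow-endpoint", {
    signal: controller.signal,
  });
  console.log(response.status);
} catch (err) {
  // an aborted fetch rejects with an "AbortError"
  console.error("Request aborted or failed:", err);
} finally {
  clearTimeout(timer);
}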

Gotchas for .NET Engineers

Gotcha 1: Forgetting await Is Silent and Produces the Wrong Type

In C#, forgetting await gives you a Task<T> where you expected a T. The compiler often catches this because you’ll try to use a Task<User> where a User is expected, and it won’t compile.

In TypeScript, the same mistake still compiles and runs — you get a Promise<User> where you expected a User. If you then pass it to something that accepts any or unknown, or log it, it prints [object Promise] and no error is thrown.

// This compiles. It is wrong.
async function processUser(id: number): Promise<void> {
  const user = userService.getById(id); // Missing await
  // user is Promise<User>, not User

  console.log(user.name); // undefined at runtime — a Promise has no .name property
  // If getById is typed as Promise<User>, TypeScript flags this; if it returns any
  // (common with loosely typed clients), it compiles and silently prints undefined
}
// This is also silently broken
async function saveAndReturn(data: UserInput): Promise<User> {
  const user = userService.create(data); // Missing await
  return user; // Returns a Promise<User> from an async function
  // The async wrapper adopts (flattens) the inner Promise, so the caller still gets
  // Promise<User> and this works by accident. But the missing await hides where
  // errors surface, and the intent is wrong.
}

Enable strict TypeScript and turn on @typescript-eslint/no-floating-promises in your ESLint configuration (the rule needs type-aware linting, so point parserOptions at your tsconfig). It catches Promises that are created but never awaited, returned, or handled:

// .eslintrc or eslint.config.mjs
{
  "rules": {
    "@typescript-eslint/no-floating-promises": "error"
  }
}

Gotcha 2: Sequential Execution When You Intended Parallel

This is the single most common performance mistake .NET engineers make when writing TypeScript for the first time. It looks correct and runs fine — it’s just slow.

// WRONG — Sequential. Each await blocks the next.
async function getDashboardData(userId: number) {
  const user = await userService.getById(userId);    // waits ~50ms
  const orders = await orderService.getByUser(userId); // waits ~80ms
  const notifications = await notificationService.getUnread(userId); // waits ~30ms

  return { user, orders, notifications };
  // Total: ~160ms
}
// CORRECT — Parallel. All three start immediately.
async function getDashboardData(userId: number) {
  const [user, orders, notifications] = await Promise.all([
    userService.getById(userId),
    orderService.getByUser(userId),
    notificationService.getUnread(userId),
  ]);

  return { user, orders, notifications };
  // Total: ~80ms (the slowest single operation)
}

The sequential version is not wrong in the way a bug is wrong — it produces correct results. It is wrong in the way a performance anti-pattern is wrong. In C#, you’d naturally reach for Task.WhenAll() here. In TypeScript, train yourself to reach for Promise.all() whenever you have independent operations.

The only time sequential await is correct is when each operation depends on the result of the previous one.
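
For example, a chain like this (hypothetical services) must stay sequential, because each step consumes the previous result:

// Sequential on purpose: every call needs the value produced by the one before it
async function checkout(cartId: number) {
  const cart = await cartService.get(cartId);
  const order = await orderService.createFromCart(cart);
  const receipt = await paymentService.charge(order);
  return receipt;
}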

Gotcha 3: Unhandled Promise Rejections Are Dangerous

In C#, forgetting to await a Task that throws results in an unobserved task exception. Modern .NET raises TaskScheduler.UnobservedTaskException and, depending on configuration, may terminate the process.

In Node.js, unhandled Promise rejections used to print only a warning; since Node.js 15 they terminate the process with exit code 1:

UnhandledPromiseRejectionWarning: Error: DB connection failed
    at Object.<anonymous> (server.ts:12:15)

This means that fire-and-forget async patterns — which might be acceptable in .NET with proper exception handling — are dangerous in Node.js:

// DANGEROUS — If sendEmail rejects, the process may crash
function handleUserSignup(user: User): void {
  emailService.sendWelcome(user); // Missing await, no .catch()
  // Execution continues, but a rejection floats unhandled
}
// SAFE — Explicitly handle the floating promise
function handleUserSignup(user: User): void {
  emailService
    .sendWelcome(user)
    .catch((err) => logger.error("Welcome email failed", { userId: user.id, err }));
}

Register a global handler to catch anything that slips through:

// In your server startup
process.on("unhandledRejection", (reason, promise) => {
  logger.error("Unhandled promise rejection", { reason, promise });
  // In production, crash and let your process manager restart
  process.exit(1);
});

Gotcha 4: Promise.all() Abandons Other Promises on First Failure

In C#, Task.WhenAll() waits for all tasks to complete (including failures) before throwing. This means if you start three parallel operations and one fails, the other two still run to completion — their results are just not returned.

Promise.all() short-circuits: the moment one Promise rejects, the returned Promise rejects immediately. The other Promises are not cancelled (there is no cancellation at the Promise level), but their results are discarded.

// If getOrders() rejects after 10ms,
// getNotifications() continues running but its result is discarded.
// This can leave database connections or resources in unexpected states.
const [user, orders, notifications] = await Promise.all([
  getUser(userId),     // completes in 50ms
  getOrders(userId),   // rejects in 10ms — Promise.all rejects immediately
  getNotifications(userId), // still running, result ignored
]);

If all three operations write to a database or acquire resources, you may end up with partial state. Use Promise.allSettled() when you need to ensure cleanup happens regardless of failures.

Gotcha 5: async in forEach Does Not Work the Way You Expect

This one burns nearly every .NET engineer. In C#, await inside a foreach loop is straightforward — it awaits each iteration before moving to the next.

// C# — Works as expected
foreach (var id in orderIds)
{
    await ProcessOrderAsync(id); // Sequential, as intended
}
// BROKEN — forEach does not await the async callback
orderIds.forEach(async (id) => {
  await processOrder(id); // These all start in parallel AND forEach returns immediately
});
// Execution continues here before any orders are processed
// CORRECT — Use for...of for sequential async iteration
for (const id of orderIds) {
  await processOrder(id); // Properly sequential
}

// CORRECT — Use Promise.all with .map() for parallel async iteration
await Promise.all(orderIds.map((id) => processOrder(id)));

Array.prototype.forEach was designed before Promises existed. It ignores the return value of its callback. An async function returns a Promise, and forEach throws that Promise away. Always use for...of for sequential async loops, or Promise.all() with .map() for parallel async loops.

Hands-On Exercise

The following exercises target the specific patterns that trip up .NET engineers. Work through each one in a TypeScript file you can run with npx tsx exercise.ts.

Setup:

// exercise.ts — paste this, then implement the TODOs below

// Simulated async operations with realistic delays
function delay(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function fetchUser(id: number): Promise<{ id: number; name: string }> {
  await delay(100);
  return { id, name: `User ${id}` };
}

async function fetchOrders(
  userId: number
): Promise<Array<{ id: number; total: number }>> {
  await delay(150);
  return [
    { id: 1, total: 99.99 },
    { id: 2, total: 149.5 },
  ];
}

async function fetchPermissions(userId: number): Promise<string[]> {
  await delay(80);
  return ["read", "write"];
}

async function processOrder(orderId: number): Promise<void> {
  await delay(50);
  if (orderId === 2) throw new Error(`Order ${orderId} failed validation`);
}

Exercise 1 — Fix the Sequential Bottleneck:

// This function takes ~330ms. Rewrite it to take ~150ms.
async function getDashboard(userId: number) {
  const user = await fetchUser(userId);
  const orders = await fetchOrders(userId);
  const permissions = await fetchPermissions(userId);
  return { user, orders, permissions };
}

// Verify your answer: log performance.now() before and after calling getDashboard(1)

Exercise 2 — Handle Partial Failures:

// This crashes if any order fails. Rewrite it to process all orders
// and return a summary: { succeeded: number[], failed: Array<{id: number, error: string}> }
async function processAllOrders(orderIds: number[]) {
  await Promise.all(orderIds.map(processOrder));
}

// Test with: processAllOrders([1, 2, 3])
// Order 2 always fails — your function should not crash

Exercise 3 — Fix the forEach Bug:

// This function returns before any orders are processed.
// Fix it so all orders complete before returning, running sequentially.
async function processOrdersSequentially(orderIds: number[]): Promise<void> {
  orderIds.forEach(async (id) => {
    await processOrder(id);
    console.log(`Processed order ${id}`);
  });
}

Exercise 4 — Implement a Timeout:

// fetchOrders takes 150ms. Implement withTimeout() so this fails:
// await withTimeout(fetchOrders(1), 100);
// And this succeeds:
// await withTimeout(fetchOrders(1), 200);

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  // TODO
}

Solutions follow at the end of this section; attempt each exercise before reading them.


Exercise Solutions:

// Exercise 1 — Parallel execution
async function getDashboard(userId: number) {
  const [user, orders, permissions] = await Promise.all([
    fetchUser(userId),
    fetchOrders(userId),
    fetchPermissions(userId),
  ]);
  return { user, orders, permissions };
}

// Exercise 2 — Partial failure handling
async function processAllOrders(orderIds: number[]) {
  const results = await Promise.allSettled(orderIds.map(processOrder));
  const succeeded: number[] = [];
  const failed: Array<{ id: number; error: string }> = [];

  results.forEach((result, index) => {
    if (result.status === "fulfilled") {
      succeeded.push(orderIds[index]);
    } else {
      failed.push({ id: orderIds[index], error: result.reason.message });
    }
  });

  return { succeeded, failed };
}

// Exercise 3 — Fix forEach
async function processOrdersSequentially(orderIds: number[]): Promise<void> {
  for (const id of orderIds) {
    await processOrder(id);
    console.log(`Processed order ${id}`);
  }
}

// Exercise 4 — Timeout
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  const timeoutPromise = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`Operation timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeoutPromise]);
}

Quick Reference

C# Pattern | TypeScript Equivalent | Notes
Task<T> | Promise<T> | Both represent eventual values
ValueTask<T> | — | No equivalent; Promises have no sync fast-path
await task | await promise | Identical syntax, different runtime
Task.WhenAll(t1, t2) | Promise.all([p1, p2]) | TS short-circuits on first failure; C# collects all
Task.WhenAny(t1, t2) | Promise.race([p1, p2]) | First to settle wins
— | Promise.allSettled([p1, p2]) | Collect all results including failures
— | Promise.any([p1, p2]) | First to succeed wins
AggregateException | First rejection error | TS does not aggregate
CancellationToken | AbortController / AbortSignal | Pass signal to fetch() and compatible APIs
ConfigureAwait(false) | — | Does not exist; not needed
Task.Run(() => work) | Worker Threads | Different API; only for CPU-bound work
Task.Delay(ms) | new Promise(r => setTimeout(r, ms)) | Or use a delay utility
Task.CompletedTask | Promise.resolve() | Already-resolved Promise
Task.FromResult(v) | Promise.resolve(v) | Already-resolved with a value
Task.FromException(ex) | Promise.reject(err) | Already-rejected Promise
foreach + await | for...of + await | Never use forEach with async callbacks
Task.WhenAll + .Select() | Promise.all(arr.map(fn)) | Parallel async over a collection
Unobserved task exception | Unhandled rejection | Register process.on('unhandledRejection', ...)
.Result / .Wait() | — | No synchronous unwrap; always await
async Task Main() | Top-level await (ESM) | Or async IIFE in non-module context

Further Reading

1.8 — Error Handling: Exceptions vs. the JS Way

For .NET engineers who know: C# exception hierarchy, try/catch/finally, exception filters, IExceptionHandler / global exception middleware in ASP.NET Core
You’ll learn: How JavaScript’s minimal error model works, why the JS community reached for return-value patterns, and how to build a disciplined error-handling strategy in TypeScript across NestJS and React
Time: 15-20 minutes


The .NET Way (What You Already Know)

C# gives you a rich, typed exception hierarchy. System.Exception sits at the root; below it sit SystemException and ApplicationException, and below those, hundreds of concrete types (ArgumentNullException, InvalidOperationException, DbUpdateConcurrencyException) plus your own custom hierarchy.

// C# — Custom exception hierarchy
public class DomainException : ApplicationException
{
    public DomainException(string message) : base(message) { }
}

public class OrderNotFoundException : DomainException
{
    public int OrderId { get; }
    public OrderNotFoundException(int orderId)
        : base($"Order {orderId} was not found.")
    {
        OrderId = orderId;
    }
}

public class InsufficientInventoryException : DomainException
{
    public string Sku { get; }
    public int Requested { get; }
    public int Available { get; }

    public InsufficientInventoryException(string sku, int requested, int available)
        : base($"SKU {sku}: requested {requested}, available {available}.")
    {
        Sku = sku;
        Requested = requested;
        Available = available;
    }
}

You catch by type, leveraging the hierarchy:

// C# — catch by type, exception filters
try
{
    await orderService.PlaceOrderAsync(dto);
}
catch (OrderNotFoundException ex) when (ex.OrderId > 0)
{
    // exception filter: only catches if the condition is true
    logger.LogWarning("Order {OrderId} not found", ex.OrderId);
    return NotFound(ex.Message);
}
catch (DomainException ex)
{
    // catches any remaining DomainException subclass
    return BadRequest(ex.Message);
}
catch (Exception ex)
{
    // last-resort catch
    logger.LogError(ex, "Unexpected error placing order");
    return StatusCode(500, "Internal server error");
}
finally
{
    metrics.IncrementOrderAttempts();
}

And you have global exception handling in ASP.NET Core via IExceptionHandler (or the classic UseExceptionHandler middleware), which gives you one place to translate unhandled exceptions into consistent HTTP responses:

// C# — Global exception handler (ASP.NET Core 8+)
public class GlobalExceptionHandler : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(
        HttpContext context,
        Exception exception,
        CancellationToken cancellationToken)
    {
        var (statusCode, title) = exception switch
        {
            OrderNotFoundException => (404, "Not Found"),
            DomainException => (400, "Bad Request"),
            _ => (500, "Internal Server Error")
        };

        context.Response.StatusCode = statusCode;
        await context.Response.WriteAsJsonAsync(
            new ProblemDetails { Title = title, Detail = exception.Message },
            cancellationToken);

        return true;
    }
}

The model is: throw everywhere, catch at the boundary, translate to HTTP once. It works well because the CLR guarantees that anything thrown is an Exception — you always know what you’re catching.


The JavaScript Way

What JavaScript Actually Has

JavaScript’s built-in Error class is surprisingly thin:

// TypeScript — the Error class
const e = new Error("something went wrong");
e.message;  // "something went wrong"
e.name;     // "Error"
e.stack;    // runtime-specific stack trace string

// Built-in subclasses (all inherit Error)
new TypeError("expected string, got number");
new RangeError("index out of bounds");
new SyntaxError("unexpected token");
new ReferenceError("x is not defined");
new URIError("malformed URI");
new EvalError("...");  // essentially never used

That is the entirety of the standard hierarchy. There is no ApplicationException. There is no checked exception system. There is no AggregateException (though Promise.allSettled and AggregateError partially fill that gap). The standard library throws on programmer errors — TypeError, RangeError — but for domain errors, there is no conventional base class.

Extending Error in TypeScript

You can extend Error, but there is a well-known pitfall with TypeScript and transpilation targets below ES2015:

// TypeScript — extending Error correctly
export class DomainError extends Error {
    constructor(message: string) {
        super(message);
        // Required when targeting ES5 (below ES2015) with TypeScript's
        // downlevel emit: the prototype chain breaks without this.
        Object.setPrototypeOf(this, new.target.prototype);
        this.name = new.target.name; // "DomainError", not "Error"
    }
}

export class OrderNotFoundError extends DomainError {
    constructor(public readonly orderId: number) {
        super(`Order ${orderId} was not found.`);
        Object.setPrototypeOf(this, new.target.prototype);
    }
}

export class InsufficientInventoryError extends DomainError {
    constructor(
        public readonly sku: string,
        public readonly requested: number,
        public readonly available: number,
    ) {
        super(`SKU ${sku}: requested ${requested}, available ${available}.`);
        Object.setPrototypeOf(this, new.target.prototype);
    }
}

The Object.setPrototypeOf(this, new.target.prototype) call is the TypeScript equivalent of a footgun warning label. If you target ES2015 or higher (which you should in 2026 — see your tsconfig.json target field), you do not need it. But if you ever find instanceof returning false for a custom error class in production, this is why.

The Critical Difference: You Can Throw Anything

In C#, the compiler enforces that throw accepts only Exception-derived types. In JavaScript, you can throw any value:

// TypeScript — this compiles and runs without error
throw "a plain string";
throw 42;
throw { code: "ERR_BAD", detail: "something" };
throw null;
throw undefined;

This means your catch block cannot assume it received an Error:

// TypeScript — safe catch pattern
try {
    await riskyOperation();
} catch (err) {
    // err has type: unknown (with useUnknownInCatchVariables, which strict enables)
    // err has type: any (without strict)

    if (err instanceof Error) {
        console.error(err.message, err.stack);
    } else {
        // someone threw a non-Error value; handle defensively
        console.error("Non-Error thrown:", String(err));
    }
}

With "useUnknownInCatchVariables": true (enabled by TypeScript’s strict flag since 4.4), err in a catch block has type unknown, which forces you to narrow it before use. This is the correct behavior and you should have strict: true in your tsconfig.json.
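
A minimal tsconfig.json excerpt with the relevant flags (strict already turns on useUnknownInCatchVariables; listing it separately is only for emphasis):

// tsconfig.json (excerpt)
{
    "compilerOptions": {
        "strict": true,
        "useUnknownInCatchVariables": true,
        "target": "ES2022"
    }
}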

A utility function to normalize caught values is worth having in your shared utilities:

// TypeScript — normalize unknown thrown value to Error
export function toError(thrown: unknown): Error {
    if (thrown instanceof Error) return thrown;
    if (typeof thrown === "string") return new Error(thrown);
    if (typeof thrown === "object" && thrown !== null) {
        return new Error(JSON.stringify(thrown));
    }
    return new Error(String(thrown));
}

// Usage
try {
    await riskyOperation();
} catch (err) {
    const error = toError(err);
    logger.error({ err: error }, "Operation failed");
}

try/catch/finally — What Is the Same, What Is Different

The syntax is identical to C#. The behavior differences are subtle:

// TypeScript — try/catch/finally, parallel to C#
async function fetchOrder(id: number): Promise<Order> {
    try {
        const row = await db.order.findUniqueOrThrow({ where: { id } });
        return mapToOrder(row);
    } catch (err) {
        if (err instanceof OrderNotFoundError) {
            throw err; // re-throw — no wrapping needed
        }
        // Prisma throws PrismaClientKnownRequestError with code "P2025"
        // when findUniqueOrThrow finds nothing. Translate it.
        if (isPrismaNotFoundError(err)) {
            throw new OrderNotFoundError(id);
        }
        throw err; // unknown error — let it propagate
    } finally {
        // runs regardless of outcome, same as C#
        metrics.recordDbQuery("order.findUnique");
    }
}

finally behaves just as it does in C#: it runs even when the function is async and even when the catch block re-throws, because Promises propagate through it correctly. The real difference is that JS has no when exception filters — you do type narrowing with instanceof checks inside the catch body.


The Result Pattern — When Not to Throw

The JavaScript community, especially TypeScript engineers influenced by Rust and functional programming, has embraced the Result<T, E> pattern for expected failure modes. The idea: rather than throwing for conditions that are part of normal operation (user not found, validation failed, payment declined), return a discriminated union that the caller is forced to handle.

This is not how C# normally works, but it has a clear analog: TryParse methods (int.TryParse, Dictionary.TryGetValue) that return bool and output the value via out parameters. The Result pattern is the functional version of that.

Rolling Your Own (Simple, No Dependencies)

// TypeScript — simple Result type
type Result<T, E extends Error = Error> =
    | { ok: true; value: T }
    | { ok: false; error: E };

function ok<T>(value: T): Result<T, never> {
    return { ok: true, value };
}

function err<E extends Error>(error: E): Result<never, E> {
    return { ok: false, error };
}

// Usage
async function findUser(
    email: string,
): Promise<Result<User, UserNotFoundError | DatabaseError>> {
    try {
        const row = await db.user.findUnique({ where: { email } });
        if (!row) return err(new UserNotFoundError(email));
        return ok(mapToUser(row));
    } catch (thrown) {
        return err(new DatabaseError(toError(thrown)));
    }
}

// Caller is forced to check
const result = await findUser("alice@example.com");
if (!result.ok) {
    if (result.error instanceof UserNotFoundError) {
        return { status: 404, body: "Not found" };
    }
    return { status: 500, body: "Database error" };
}
const user = result.value; // TypeScript knows this is User here

Using neverthrow

neverthrow is the most widely adopted Result library for TypeScript. It follows the Rust/Haskell model closely and adds methods like .map(), .mapErr(), .andThen() (flat-map) that let you chain operations without nested if (!result.ok) checks:

pnpm add neverthrow
// TypeScript — neverthrow
import { Result, ok, err, ResultAsync } from "neverthrow";

// ResultAsync wraps Promise<Result<T, E>> with the same chaining API
function findUser(email: string): ResultAsync<User, UserNotFoundError | DatabaseError> {
    return ResultAsync.fromPromise(
        db.user.findUnique({ where: { email } }), // resolves to null when missing — handled below
        (thrown) => new DatabaseError(toError(thrown)),
    ).andThen((row) =>
        row ? ok(mapToUser(row)) : err(new UserNotFoundError(email)),
    );
}

function getProfile(
    email: string,
): ResultAsync<Profile, UserNotFoundError | DatabaseError | ProfileError> {
    return findUser(email)
        .andThen((user) => fetchProfile(user.id)) // chains only on ok
        .map((profile) => enrichProfile(profile)); // transforms value on ok
}

// At the call boundary
const result = await getProfile("alice@example.com");
result.match(
    (profile) => res.json(profile),       // ok branch
    (error) => res.status(mapStatus(error)).json({ message: error.message }),
);

The C# mental model for .andThen() is SelectMany / flatMap over a Maybe<T> or an Either<L, R>. If you have used Railway Oriented Programming in F# or functional patterns in C#, this will feel familiar.

When to Use Result vs. Throw

This is where the community has strong opinions. Here is the opinionated recommendation for this stack:

Throw (exceptions) for:

  • Programmer errors — null where null is impossible, index out of bounds, contract violations. These are bugs, not expected failure modes. Crashing loudly is correct.
  • Infrastructure failures with no reasonable local recovery — database is completely unreachable, file system full. These bubble up to your global handler.
  • Anywhere in your stack where a calling NestJS controller or React error boundary will catch and handle them. The framework does the heavy lifting.

Return Result for:

  • Domain operations with multiple expected outcomes the caller must handle: findUser might return UserNotFoundError; placeOrder might return InsufficientInventoryError or PaymentDeclinedError. These are not exceptional — they are anticipated branches.
  • Service-layer functions called by other service functions, where the caller wants to compose operations and handle errors uniformly.
  • Functions that can fail in multiple ways and the caller needs to distinguish the error type to respond differently.

The rule of thumb: if a caller should always handle the failure case, use Result. If a failure is genuinely unexpected and the framework should catch it, throw.


NestJS: Exception Filters vs. ASP.NET Exception Middleware

NestJS’s equivalent of ASP.NET’s IExceptionHandler / UseExceptionHandler is the Exception Filter. The conceptual mapping is direct.

ASP.NET Core                         NestJS
─────────────────────────────────    ─────────────────────────────────
IExceptionHandler                    ExceptionFilter (interface)
app.UseExceptionHandler(...)         app.useGlobalFilters(...)
[TypeFilter(typeof(MyFilter))]       @UseFilters(MyFilter)  (controller/route)
ProblemDetails response shape        custom or HttpException shape

Built-in HttpException Hierarchy

NestJS ships with a small but practical exception hierarchy:

import {
    HttpException,
    BadRequestException,    // 400
    UnauthorizedException,  // 401
    ForbiddenException,     // 403
    NotFoundException,      // 404
    ConflictException,      // 409
    UnprocessableEntityException, // 422
    InternalServerErrorException, // 500
} from "@nestjs/common";

// In a controller or service — throw NestJS HTTP exceptions
// and they are automatically translated to HTTP responses
throw new NotFoundException(`Order ${id} not found`);
throw new BadRequestException({ message: "Invalid input", fields: errors });

NestJS’s default exception filter catches anything that is an HttpException and writes a JSON response. Anything that is not an HttpException gets a generic 500 response. This is the equivalent of ASP.NET Core’s default exception handler behavior.

Writing a Global Exception Filter

For a real application you want one place that translates your domain errors to HTTP responses — exactly like ASP.NET’s IExceptionHandler:

// TypeScript — NestJS global exception filter
// src/common/filters/global-exception.filter.ts
import {
    ExceptionFilter,
    Catch,
    ArgumentsHost,
    HttpException,
    HttpStatus,
    Logger,
} from "@nestjs/common";
import { Request, Response } from "express";
import { DomainError, OrderNotFoundError, InsufficientInventoryError } from "../errors";

@Catch() // no argument = catch everything (like catch (Exception ex) in C#)
export class GlobalExceptionFilter implements ExceptionFilter {
    private readonly logger = new Logger(GlobalExceptionFilter.name);

    catch(exception: unknown, host: ArgumentsHost): void {
        const ctx = host.switchToHttp();
        const response = ctx.getResponse<Response>();
        const request = ctx.getRequest<Request>();

        const { status, message } = this.resolveError(exception);

        if (status >= 500) {
            this.logger.error(
                { err: exception, path: request.url },
                "Unhandled exception",
            );
        }

        response.status(status).json({
            statusCode: status,
            message,
            path: request.url,
            timestamp: new Date().toISOString(),
        });
    }

    private resolveError(exception: unknown): { status: number; message: string } {
        // NestJS HTTP exceptions — already translated
        if (exception instanceof HttpException) {
            return {
                status: exception.getStatus(),
                message: String(exception.message),
            };
        }

        // Domain errors — translate to HTTP status codes
        if (exception instanceof OrderNotFoundError) {
            return { status: HttpStatus.NOT_FOUND, message: exception.message };
        }
        if (exception instanceof InsufficientInventoryError) {
            return { status: HttpStatus.UNPROCESSABLE_ENTITY, message: exception.message };
        }
        if (exception instanceof DomainError) {
            return { status: HttpStatus.BAD_REQUEST, message: exception.message };
        }

        // Truly unexpected
        return {
            status: HttpStatus.INTERNAL_SERVER_ERROR,
            message: "An unexpected error occurred.",
        };
    }
}

Register it globally in main.ts:

// TypeScript — main.ts
import { NestFactory } from "@nestjs/core";
import { AppModule } from "./app.module";
import { GlobalExceptionFilter } from "./common/filters/global-exception.filter";

async function bootstrap() {
    const app = await NestFactory.create(AppModule);
    app.useGlobalFilters(new GlobalExceptionFilter());
    await app.listen(3000);
}
bootstrap();

You can also scope filters to a specific controller or route with the @UseFilters() decorator, which is the NestJS equivalent of ASP.NET’s [TypeFilter] or [ExceptionFilter] attributes.
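
A brief sketch of that scoping (OrdersController and OrderExceptionFilter are illustrative names):

// TypeScript — scoping a filter to one controller, like [TypeFilter] in ASP.NET
import { Controller, Get, Param, UseFilters } from "@nestjs/common";
import { OrderExceptionFilter } from "./order-exception.filter";

@Controller("orders")
@UseFilters(OrderExceptionFilter) // applies to every route in this controller
export class OrdersController {
    @Get(":id")
    findOne(@Param("id") id: string) {
        // route handler body omitted
    }
}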


React: Error Boundaries

On the frontend, React’s equivalent of a global exception handler for rendering errors is the Error Boundary. It is a class component (the one remaining valid use case for class components in 2026) that implements componentDidCatch and getDerivedStateFromError. When any component in its subtree throws during rendering, React calls the boundary instead of crashing the entire page.

// TypeScript — React error boundary
// src/components/ErrorBoundary.tsx
import React, { Component, ErrorInfo, ReactNode } from "react";

interface Props {
    fallback: ReactNode | ((error: Error) => ReactNode);
    children: ReactNode;
    onError?: (error: Error, info: ErrorInfo) => void;
}

interface State {
    error: Error | null;
}

export class ErrorBoundary extends Component<Props, State> {
    state: State = { error: null };

    static getDerivedStateFromError(error: Error): State {
        return { error };
    }

    componentDidCatch(error: Error, info: ErrorInfo): void {
        this.props.onError?.(error, info);
        // Sentry.captureException(error, { extra: info }); // see Sentry section
    }

    render(): ReactNode {
        if (this.state.error) {
            const { fallback } = this.props;
            return typeof fallback === "function"
                ? fallback(this.state.error)
                : fallback;
        }
        return this.props.children;
    }
}

// Usage — wrap your application or sections of it
function App() {
    return (
        <ErrorBoundary
            fallback={(err) => (
                <div role="alert">
                    <h2>Something went wrong</h2>
                    <p>{err.message}</p>
                </div>
            )}
            onError={(err, info) => console.error(err, info)}
        >
            <Router />
        </ErrorBoundary>
    );
}

Error boundaries only catch errors that occur during rendering, in lifecycle methods, and in constructors of components in their subtree. They do not catch errors in:

  • Event handlers (use regular try/catch inside the handler)
  • Asynchronous code (setTimeout, fetch callbacks — these happen outside React’s call stack)
  • The error boundary itself

For async data-fetching errors, TanStack Query surfaces them through the error property of its query result, and you render an error state from the component. React 19 introduces an onCaughtError / onUncaughtError callback on createRoot, but error boundaries remain the primary mechanism.
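
As a sketch of that query-side pattern (fetchOrders is a hypothetical API client; the property names follow TanStack Query v5):

// TypeScript — async fetch errors surface as state, not as thrown render errors
import { useQuery } from "@tanstack/react-query";

declare function fetchOrders(): Promise<Array<{ id: number }>>; // hypothetical

function OrderList() {
    const { data, error, isPending } = useQuery({
        queryKey: ["orders"],
        queryFn: fetchOrders,
    });

    if (isPending) return <p>Loading…</p>;
    if (error) return <p role="alert">Failed to load orders: {error.message}</p>;
    return (
        <ul>
            {data.map((order) => (
                <li key={order.id}>{order.id}</li>
            ))}
        </ul>
    );
}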


Node.js: Unhandled Rejection and Uncaught Exception Handlers

In .NET, an unhandled exception on a background thread tears down the process; AppDomain.UnhandledException lets you observe it (it cannot prevent the crash), and TaskScheduler.UnobservedTaskException lets you observe faulted tasks nobody awaited. In Node.js, the equivalents are:

// TypeScript / Node.js — process-level error handlers
// src/main.ts or a dedicated setup file

// Closest analog of TaskScheduler.UnobservedTaskException: a rejected Promise nobody awaited.
// In Node.js 15+, an unhandled rejection terminates the process by default.
// In older versions it was a warning. Always handle this explicitly.
process.on("unhandledRejection", (reason: unknown, promise: Promise<unknown>) => {
    const error = reason instanceof Error ? reason : new Error(String(reason));
    logger.fatal({ err: error }, "Unhandled promise rejection — process will exit");
    // Give Sentry time to flush before exiting
    Sentry.captureException(error);
    Sentry.flush(2000).then(() => process.exit(1));
});

// Equivalent to AppDomain.UnhandledException for synchronous throws.
// This is your last-resort handler. The process is in an undefined state.
process.on("uncaughtException", (error: Error) => {
    logger.fatal({ err: error }, "Uncaught exception — process will exit");
    Sentry.captureException(error);
    Sentry.flush(2000).then(() => process.exit(1));
});

// Graceful shutdown on SIGTERM (equivalent to Windows Service stop / ASP.NET
// Core application lifetime cancellation token)
process.on("SIGTERM", () => {
    logger.info("SIGTERM received — shutting down gracefully");
    server.close(() => process.exit(0));
});

Do not assume the framework wires these up for you — NestFactory.create does not register process-level handlers. Whether you bootstrap a NestJS app or write raw Node.js scripts, add them explicitly in your startup code.

The critical rule: do not swallow unhandled rejections. The pattern process.on("unhandledRejection", () => {}) — doing nothing — is a production debugging nightmare. Always log and always exit or re-throw. The process is corrupt at that point.


Sentry Integration

Sentry fills the role of Application Insights for error tracking in this stack. The integration hooks directly into your exception filter (NestJS) and error boundary (React).

NestJS

pnpm add @sentry/node @sentry/profiling-node
// TypeScript — Sentry init in main.ts (before everything else)
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

Sentry.init({
    dsn: process.env.SENTRY_DSN,
    environment: process.env.NODE_ENV,
    integrations: [nodeProfilingIntegration()],
    tracesSampleRate: process.env.NODE_ENV === "production" ? 0.1 : 1.0,
    profilesSampleRate: 1.0,
    beforeSend(event, hint) {
        // Suppress 4xx errors from Sentry — they are expected
        const status = (hint?.originalException as any)?.status;
        if (status && status >= 400 && status < 500) return null;
        return event;
    },
});

In your global exception filter, add Sentry.captureException(exception) for server errors:

// TypeScript — updated GlobalExceptionFilter with Sentry
import * as Sentry from "@sentry/node";

// Inside the catch() method, after resolveError:
if (status >= 500) {
    Sentry.captureException(exception, {
        extra: { path: request.url, method: request.method },
        user: { id: request.user?.id }, // if you have auth context
    });
}

Next.js / React

pnpm add @sentry/nextjs

Sentry’s Next.js SDK initializes through the instrumentation.ts file:

// TypeScript — instrumentation.ts (Next.js 15+ / App Router)
export async function register() {
    if (process.env.NEXT_RUNTIME === "nodejs") {
        const Sentry = await import("@sentry/nextjs");
        Sentry.init({
            dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
            tracesSampleRate: 0.1,
        });
    }
}

Wire the error boundary to Sentry:

// TypeScript — ErrorBoundary.tsx with Sentry
import * as Sentry from "@sentry/nextjs";

componentDidCatch(error: Error, info: ErrorInfo): void {
    Sentry.captureException(error, { extra: { componentStack: info.componentStack } });
    this.props.onError?.(error, info);
}

Sentry groups errors by their stack trace fingerprint, deduplicates across users, and shows you the release version and user context. The beforeSend callback is the equivalent of Application Insights’ ITelemetryProcessor — use it to filter noise (4xx errors, bot traffic) before they hit your quota.


Designing Error Hierarchies in TypeScript

Given everything above, here is the recommended hierarchy design for a NestJS + React project:

// TypeScript — error hierarchy
// src/common/errors/index.ts

// ─── Base ────────────────────────────────────────────────────────────────────

export class AppError extends Error {
    constructor(
        message: string,
        public readonly code: string,
        public readonly metadata?: Record<string, unknown>,
    ) {
        super(message);
        Object.setPrototypeOf(this, new.target.prototype);
        this.name = new.target.name;
    }
}

// ─── Domain errors (expected, part of the domain model) ──────────────────────

export class DomainError extends AppError {}

export class NotFoundError extends DomainError {
    constructor(resource: string, id: string | number) {
        super(`${resource} '${id}' not found.`, "NOT_FOUND", { resource, id });
    }
}

export class ConflictError extends DomainError {
    constructor(message: string) {
        super(message, "CONFLICT");
    }
}

export class ValidationError extends DomainError {
    constructor(
        message: string,
        public readonly fields?: Record<string, string[]>,
    ) {
        super(message, "VALIDATION_ERROR", { fields });
    }
}

// ─── Infrastructure errors (unexpected, environment-level failures) ───────────

export class InfrastructureError extends AppError {}

export class DatabaseError extends InfrastructureError {
    constructor(cause: Error) {
        super("A database error occurred.", "DATABASE_ERROR");
        this.cause = cause; // Node.js 16.9+ Error.cause support
    }
}

export class ExternalServiceError extends InfrastructureError {
    constructor(service: string, cause: Error) {
        super(`External service '${service}' failed.`, "EXTERNAL_SERVICE_ERROR", {
            service,
        });
        this.cause = cause;
    }
}

Map this to HTTP status codes in your global exception filter:

// TypeScript — status code mapping
function resolveHttpStatus(error: unknown): number {
    if (error instanceof NotFoundError) return 404;
    if (error instanceof ConflictError) return 409;
    if (error instanceof ValidationError) return 422;
    if (error instanceof DomainError) return 400;
    if (error instanceof InfrastructureError) return 503;
    return 500;
}

This keeps the HTTP concern out of your domain model — your services throw NotFoundError, not NotFoundException. The filter translates at the boundary, which is the same separation you get in ASP.NET with IExceptionHandler.


Key Differences

Concept | C# / ASP.NET Core | TypeScript / NestJS + React
Exception base class | System.Exception | Error (no ApplicationException analog)
Checked exceptions | No (unlike Java, C# is unchecked) | No
throw type constraint | Must be Exception-derived | Any value — you can throw 42
catch type narrowing | By type: catch (NotFoundException ex) | Must narrow inside body: if (err instanceof X)
Exception filters | when (condition) keyword | Not in language — use if inside catch
Global exception handler | IExceptionHandler / UseExceptionHandler | NestJS ExceptionFilter with @Catch()
Catch scope | Controller attribute [TypeFilter] | @UseFilters() decorator — same concept
Render error boundary | No (Blazor has ErrorBoundary component) | React ErrorBoundary class component
Unhandled async errors | TaskScheduler.UnobservedTaskException | process.on("unhandledRejection", ...)
Return-value errors | TryParse, out bool pattern | Result<T, E> (neverthrow, ts-results, or DIY)
Error tracking | Application Insights | Sentry (@sentry/node, @sentry/nextjs)
AggregateException | Yes, from Task.WhenAll | AggregateError (limited) — Promise.allSettled for granular results

Gotchas for .NET Engineers

Gotcha 1: instanceof Fails Across Module Boundaries

In C#, type identity is determined by the assembly-qualified type name. In JavaScript, instanceof checks the prototype chain — and the prototype is specific to the module instance. If your error classes are instantiated in one bundle chunk and the catch block is in another, instanceof can return false for what looks like the same class.

This is most likely to bite you in:

  • Monorepos where the error classes live in a shared package that gets bundled twice (once into the API, once into a worker)
  • Node.js environments with multiple copies of the same package in node_modules (pnpm’s strict mode prevents this; npm and yarn can allow duplicate installs)
  • Code that uses vm.runInContext or worker threads that don’t share the same module registry

The fix: ensure error classes are a single shared dependency, and use pnpm’s strict mode to prevent duplicate packages. As a defensive measure, you can check error.name or error.code string properties instead of instanceof for errors that cross process boundaries.
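
A defensive check by code string, as a sketch (it assumes your errors carry the code property from the AppError base shown earlier):

// TypeScript — compare a stable code string instead of class identity
function isNotFoundError(err: unknown): boolean {
    return (
        typeof err === "object" &&
        err !== null &&
        "code" in err &&
        (err as { code?: unknown }).code === "NOT_FOUND"
    );
}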

Gotcha 2: async Functions Swallow Throws as Rejected Promises

In C#, a throw inside a method propagates synchronously up the call stack, regardless of whether the method is async. In JavaScript, a throw inside an async function does not propagate synchronously — it becomes a rejected Promise. If that promise is not awaited and not .catch()ed, it becomes an unhandled rejection.

// This looks like it throws synchronously, but it does not
async function doWork(): Promise<void> {
    throw new Error("This becomes a rejected Promise, not a synchronous throw");
}

// Caller without await — the error is silently lost in older Node.js
doWork(); // Missing await — in Node 15+, this crashes the process on next tick

// Correct
await doWork(); // error propagates to the caller's catch block

This is the same problem Article 1.7 covers with async/await, but it is especially treacherous in error handling because .NET engineers instinctively assume throw stops execution immediately and propagates up. In an async context in JS, it does stop the function, but the propagation is through the Promise chain, not the synchronous call stack.

Gotcha 3: NestJS’s Default Filter Does Not Catch What You Think

NestJS ships with a built-in global exception filter (BaseExceptionFilter) that handles HttpException and its subclasses. Anything that is NOT an HttpException — including your own custom domain errors — falls through to its catch-all branch, which returns a plain { statusCode: 500, message: "Internal server error" } response and logs the full error internally.

This means: if you throw new OrderNotFoundError(id) from a service and you have not registered a global filter that handles it, the client gets a 500, not a 404. The error is logged to the NestJS internal logger but not sent to Sentry and not translated.

The fix is the global exception filter shown above. Register it before your application starts listening, and test it explicitly — throw a custom domain error from a test endpoint and verify the status code and Sentry capture.
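
For example, a throwaway endpoint like this (illustrative names; delete it once verified) makes the check concrete:

// TypeScript — temporary endpoint to prove the global filter translates domain errors
import { Controller, Get } from "@nestjs/common";
import { NotFoundError } from "../common/errors";

@Controller("debug")
export class DebugController {
    @Get("boom")
    boom(): never {
        throw new NotFoundError("Order", 999); // expect a 404 from the filter, not a 500
    }
}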

Gotcha 4: Error Boundaries Do Not Catch Event Handler Errors

A React error boundary catches errors thrown during rendering. It does not catch errors thrown inside onClick, onSubmit, or any other DOM event handler. This surprises .NET engineers who expect a top-level handler to catch everything.

// This error is NOT caught by an error boundary above this component
function DeleteButton({ id }: { id: number }) {
    async function handleClick() {
        try {
            await deleteOrder(id); // may throw
        } catch (err) {
            // Must handle here — the error boundary will not see this
            toast.error("Failed to delete order");
        }
    }
    return <button onClick={handleClick}>Delete</button>;
}

Use try/catch inside event handlers. For async mutations, TanStack Query’s useMutation has an onError callback that gives you a consistent place to handle mutation errors without wrapping every mutate() call in try/catch.
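
A sketch of that mutation-side pattern (deleteOrder and the toast helper are placeholders, as in the snippet above):

// TypeScript — useMutation gives one place to handle mutation failures
import { useMutation } from "@tanstack/react-query";

declare function deleteOrder(id: number): Promise<void>; // hypothetical API client
declare const toast: { error: (msg: string) => void };   // placeholder notification helper

function useDeleteOrder() {
    return useMutation({
        mutationFn: deleteOrder,
        onError: (error) => {
            // runs for every failed mutate() call; no try/catch needed in the click handler
            toast.error(`Failed to delete order: ${error.message}`);
        },
    });
}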

Gotcha 5: The Stack Trace Is Often Useless in Production Without Source Maps

In .NET, stack traces reference your original C# source files with line numbers. In production Node.js and browser JavaScript, the code is compiled and minified — stack traces point to mangled variable names and single-line bundles.

Source maps fix this. Make sure your build emits them — "sourceMap": true in tsconfig.json, or the equivalent option if you compile with SWC. Sentry resolves them automatically if they are uploaded during deployment. Without source maps, your Sentry errors show frames like t.<anonymous> (main.js:1:45821) — useless for debugging.
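
A minimal build-side sketch (tsconfig.json excerpt):

// tsconfig.json (excerpt) — emit source maps next to the compiled output
{
    "compilerOptions": {
        "sourceMap": true,
        "inlineSources": true
    }
}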

Configure source map upload in your CI pipeline:

# Example: upload source maps to Sentry during deployment
pnpm dlx @sentry/cli releases files $SENTRY_RELEASE upload-sourcemaps ./dist

Hands-On Exercise

This exercise builds the complete error handling layer for a NestJS order management API. You will implement every pattern from this article: custom error hierarchy, global exception filter, Sentry integration, and a Result-based service method.

Prerequisites: A running NestJS project (from Track 4 exercises, or nest new order-api).

Step 1 — Create the error hierarchy

Create src/common/errors/index.ts with AppError, DomainError, NotFoundError, ValidationError, and DatabaseError exactly as shown in the “Designing Error Hierarchies” section above.

Step 2 — Create a Result type

Install neverthrow: pnpm add neverthrow

Create src/common/result.ts that re-exports Result, ResultAsync, ok, err from neverthrow.

Step 3 — Write an OrderService method using Result

// src/orders/orders.service.ts
import { Injectable } from "@nestjs/common";
import { PrismaService } from "../prisma/prisma.service";
import { ResultAsync, ok, err } from "neverthrow";
import { NotFoundError, DatabaseError } from "../common/errors";
import { toError } from "../common/utils/to-error";

@Injectable()
export class OrdersService {
    constructor(private readonly prisma: PrismaService) {}

    findOne(id: number): ResultAsync<Order, NotFoundError | DatabaseError> {
        return ResultAsync.fromPromise(
            this.prisma.order.findUnique({ where: { id } }),
            (thrown) => new DatabaseError(toError(thrown)),
        ).andThen((row) =>
            row ? ok(mapToOrder(row)) : err(new NotFoundError("Order", id)),
        );
    }
}

Step 4 — Write the controller, throwing instead of returning

In the controller, call the service and translate the Result to an exception or a response. Controllers are your boundary — they should throw (or use NestJS HttpException) rather than return Result:

// src/orders/orders.controller.ts
@Get(":id")
async findOne(@Param("id", ParseIntPipe) id: number) {
    const result = await this.ordersService.findOne(id);
    return result.match(
        (order) => order,
        (error) => { throw error; }, // domain error propagates to global filter
    );
}

Step 5 — Register the global exception filter

Implement and register GlobalExceptionFilter in main.ts as shown in the NestJS section.

Step 6 — Verify behavior

Use curl or the Swagger UI to request an order that does not exist. Verify you receive a 404 with your error shape, not a 500. Introduce a deliberate bug (throw an untyped object) and verify the global filter handles it gracefully and returns a 500 with no stack trace exposed to the client.

Step 7 — Add Sentry (optional but recommended)

Install @sentry/node, initialize in main.ts, and add Sentry.captureException to the global filter for 5xx responses. Trigger a 500 and verify the event appears in your Sentry dashboard with a readable stack trace.


Quick Reference

.NET Concept | TypeScript / NestJS Equivalent | Notes
System.Exception | Error | Minimal built-in hierarchy; extend manually
ApplicationException | No direct equivalent | Use your own DomainError extends Error
catch (SpecificException ex) | if (err instanceof SpecificError) inside catch (err) | No catch (Type) syntax in JS
when (condition) exception filter | if (condition) inside catch body | Not a language feature in JS
IExceptionHandler | @Catch() ExceptionFilter + app.useGlobalFilters() | Direct equivalent
[TypeFilter(typeof(X))] | @UseFilters(X) on controller/handler | Same scoping concept
AggregateException | AggregateError / Promise.allSettled | allSettled gives per-item results
TaskScheduler.UnobservedTaskException | process.on("unhandledRejection", ...) | Node 15+: crashes process by default
AppDomain.UnhandledException | process.on("uncaughtException", ...) | Last resort; always exit after
TryParse / out bool pattern | Result<T, E> / neverthrow | ResultAsync for async operations
Sentry.CaptureException (C# SDK) | Sentry.captureException() | Same API surface, same Sentry project
Error Boundary (none / Blazor <ErrorBoundary>) | React ErrorBoundary class component | Catches render-phase errors only
Application Insights error tracking | Sentry @sentry/node + @sentry/nextjs | Sentry = errors; Grafana/Datadog = metrics
throw new NotFoundException(...) (ASP.NET) | throw new NotFoundException(...) (NestJS built-in) | NestJS has a matching built-in hierarchy
Object.setPrototypeOf workaround | Only needed when target < ES2015 | Set "target": "ES2020" or higher and skip it

Further Reading

Configuration & Environment: appsettings.json vs. .env

For .NET engineers who know: IConfiguration, appsettings.json, appsettings.{Environment}.json, user secrets, Azure Key Vault
You’ll learn: How Node.js handles configuration through environment variables and .env files, how that maps to the layered provider model you already know, and how validation with Zod replaces the type safety that IConfiguration gives you for free.
Time: 10-15 min read


The .NET Way (What You Already Know)

ASP.NET Core’s configuration system is layered and well-integrated into the framework. At startup, WebApplication.CreateBuilder() assembles a configuration pipeline from multiple providers in priority order:

// What the builder does internally — you don't write this, but this is what happens:
// 1. appsettings.json                 (base config, committed to source)
// 2. appsettings.{Environment}.json   (overrides per environment)
// 3. User Secrets                     (development only, ~/.microsoft/usersecrets/)
// 4. Environment variables            (override the JSON files; used in production)
// 5. Command-line arguments           (highest of all)

// Consuming config is strongly typed through IConfiguration or IOptions<T>:
var builder = WebApplication.CreateBuilder(args);
var dbConnectionString = builder.Configuration.GetConnectionString("DefaultConnection");
var stripeKey = builder.Configuration["Stripe:SecretKey"];

// Or with IOptions<T> for strongly typed sections:
builder.Services.Configure<StripeOptions>(
    builder.Configuration.GetSection("Stripe")
);

Your appsettings.json has hierarchy, and IOptions<T> gives you compile-time safety:

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=MyApp;..."
  },
  "Stripe": {
    "SecretKey": "sk_test_...",
    "WebhookSecret": "whsec_..."
  },
  "FeatureFlags": {
    "EnableNewCheckout": false
  }
}

User secrets (dotnet user-secrets set) keep sensitive values off disk during development without polluting your committed files. Azure Key Vault handles production secrets with rotation and audit logging.

The key design property here is layering: lower-priority sources set defaults, higher-priority sources override them. Nothing from appsettings.json leaks into production unless you want it to.


The Node.js Way

Node.js has no built-in configuration system. There is no IConfiguration. The runtime gives you one thing: process.env, a flat dictionary of environment variables.

// This is the entirety of Node.js's built-in config support:
const dbUrl = process.env.DATABASE_URL;
const stripeKey = process.env.STRIPE_SECRET_KEY;

// Both are string | undefined. No hierarchy. No type safety. No validation.

That’s it. Everything else — loading from files, validation, hierarchy — is provided by libraries.

.env Files and dotenv

The .env file convention is the community’s solution for local development. You put your environment variables in a file named .env at the project root, and the dotenv library loads them into process.env at startup.

# .env — local development values, NEVER committed to source control
DATABASE_URL="postgresql://postgres:password@localhost:5432/myapp"
STRIPE_SECRET_KEY="sk_test_your_stripe_test_key_here"
STRIPE_WEBHOOK_SECRET="whsec_test_abc123"
CLERK_SECRET_KEY="sk_test_clerk_abc123"
FEATURE_FLAG_NEW_CHECKOUT="false"
NODE_ENV="development"
PORT="3000"

Notice the structure: flat key-value pairs using SCREAMING_SNAKE_CASE. The nested hierarchy you get from appsettings.json (e.g., Stripe:SecretKey) becomes a flat name with underscores.

# .env.example — committed to source control, documents required variables
# Copy this to .env and fill in your values.
DATABASE_URL=""
STRIPE_SECRET_KEY=""
STRIPE_WEBHOOK_SECRET=""
CLERK_SECRET_KEY=""
FEATURE_FLAG_NEW_CHECKOUT="false"
NODE_ENV="development"
PORT="3000"

.env.example is the Node.js equivalent of documenting your required configuration — it tells new developers what they need to fill in. It is committed. .env is not.

Install dotenv and load it as early as possible in your entry point:

// src/main.ts (NestJS) or src/index.ts
import 'dotenv/config'; // must be first import — loads .env into process.env

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(process.env.PORT ?? 3000);
}
bootstrap();

The dotenv/config import style (rather than calling dotenv.config()) ensures the load happens synchronously before any other module initialization. This matters because modules may read process.env at import time.
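
To see why, here is a minimal sketch (the db.ts module is hypothetical) of a module that captures configuration at import time:

// db.ts — hypothetical module that reads process.env as soon as it is imported
export const connectionString = process.env.DATABASE_URL;

// main.ts — wrong order: db.ts is evaluated before dotenv runs
import { connectionString } from './db'; // DATABASE_URL is still undefined here
import 'dotenv/config';                  // too late: the value above was already captured

console.log(connectionString); // undefined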

NestJS ConfigModule: The IConfiguration Equivalent

Raw process.env is untyped and unvalidated. NestJS provides @nestjs/config to give you a structured, validated, injectable configuration service — the closest thing to IConfiguration in the NestJS world.

First, define a validation schema using Zod (our stack’s choice) or class-validator:

// src/config/env.schema.ts
import { z } from 'zod';

export const envSchema = z.object({
  // Server
  NODE_ENV: z.enum(['development', 'staging', 'production']).default('development'),
  PORT: z.coerce.number().int().positive().default(3000),

  // Database
  DATABASE_URL: z.string().url(),

  // Stripe
  STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
  STRIPE_WEBHOOK_SECRET: z.string().startsWith('whsec_'),

  // Clerk
  CLERK_SECRET_KEY: z.string().startsWith('sk_'),

  // Feature flags — coerce from string to boolean
  FEATURE_FLAG_NEW_CHECKOUT: z
    .string()
    .transform((val) => val === 'true')
    .default('false'),
});

// Infer the TypeScript type from the schema
export type Env = z.infer<typeof envSchema>;

Notice z.coerce.number() for PORT: environment variables are always strings, so you need explicit coercion for non-string types. This is a fundamental difference from appsettings.json where the JSON parser handles type conversion for you.
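
What the rest of your code sees after parsing is plain, correctly typed values. A sketch using the schema above:

// Illustrative: parse once at startup, then every field has its coerced type
const env = envSchema.parse(process.env);

env.PORT;                      // number (coerced from the string "3000")
env.FEATURE_FLAG_NEW_CHECKOUT; // boolean (transformed from "true" / "false")
env.DATABASE_URL;              // string, guaranteed to be a valid URL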

Now register ConfigModule in your app module with validation:

// src/app.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { envSchema } from './config/env.schema';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,           // No need to import ConfigModule in every feature module
      envFilePath: '.env',       // Path to your .env file
      validate: (config) => {
        const result = envSchema.safeParse(config);
        if (!result.success) {
          // Log the specific validation errors and crash at startup
          console.error('Invalid environment configuration:');
          console.error(result.error.format());
          throw new Error('Configuration validation failed');
        }
        return result.data;
      },
    }),
  ],
})
export class AppModule {}

Failing at startup with a clear error message is the right behavior. It is far better to crash immediately with “STRIPE_SECRET_KEY is required” than to start successfully and fail on the first Stripe API call.

Inject ConfigService wherever you need configuration values:

// src/payments/payments.service.ts
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import type { Env } from '../config/env.schema';
import Stripe from 'stripe';

@Injectable()
export class PaymentsService {
  private readonly stripe: Stripe;

  constructor(private readonly config: ConfigService<Env, true>) {
    // The second generic argument `true` makes get() return a non-nullable type
    this.stripe = new Stripe(this.config.get('STRIPE_SECRET_KEY'));
  }

  async createPaymentIntent(amount: number): Promise<Stripe.PaymentIntent> {
    return this.stripe.paymentIntents.create({ amount, currency: 'usd' });
  }
}

The ConfigService<Env, true> generic gives you type-safe access: this.config.get('STRIPE_SECRET_KEY') returns string, not string | undefined. If you try to get a key that doesn’t exist in Env, TypeScript will complain at compile time.

Next.js Built-in Environment Handling

Next.js has its own configuration system built on top of the same .env convention, with one critical addition: the NEXT_PUBLIC_ prefix.

# .env.local (Next.js project)

# Server-only variables — never exposed to the browser
DATABASE_URL="postgresql://postgres:password@localhost:5432/myapp"
STRIPE_SECRET_KEY="sk_test_..."
CLERK_SECRET_KEY="sk_test_..."

# Client-side variables — prefixed with NEXT_PUBLIC_, bundled into the browser JS
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="pk_test_..."
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY="pk_test_..."
NEXT_PUBLIC_APP_URL="http://localhost:3000"

The NEXT_PUBLIC_ prefix is Next.js’s build-time mechanism for safely exposing variables to client-side code. At build time, Next.js replaces references to process.env.NEXT_PUBLIC_* with their literal values in the browser bundle. Variables without the prefix are only available in Server Components, API routes, and middleware.

This distinction maps to a .NET concept you know: server-side values versus configuration exposed in Blazor WebAssembly or bundled JavaScript. The difference is that Next.js enforces the boundary at build time through naming convention rather than through a framework mechanism.

// app/page.tsx — React Server Component (runs on server)
// Both server and public vars are accessible here
const dbUrl = process.env.DATABASE_URL;                        // works
const stripeKey = process.env.STRIPE_SECRET_KEY;              // works
const publishableKey = process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY; // works

// components/checkout-form.tsx — Client Component ('use client')
// Only NEXT_PUBLIC_ vars are available at runtime
const publishableKey = process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY; // works
const secretKey = process.env.STRIPE_SECRET_KEY; // undefined — never reaches browser

Next.js also provides environment-specific file loading with a clear priority order:

.env.local          (highest priority, always gitignored)
.env.development    (loaded when NODE_ENV=development)
.env.production     (loaded when NODE_ENV=production)
.env                (lowest priority, can be committed for non-secret defaults)

This is the layered model you recognize from appsettings.json — base config at the bottom, environment-specific overrides above it, local overrides at the top.

For type safety in Next.js, you can use the same Zod validation pattern at the module level:

// src/lib/env.ts (Next.js project)
import { z } from 'zod';

const serverEnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
  CLERK_SECRET_KEY: z.string(),
  NODE_ENV: z.enum(['development', 'test', 'production']).default('development'),
});

const clientEnvSchema = z.object({
  NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: z.string().startsWith('pk_'),
  NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY: z.string().startsWith('pk_'),
  NEXT_PUBLIC_APP_URL: z.string().url(),
});

// Validate server env only when running on the server
export const serverEnv = typeof window === 'undefined'
  ? serverEnvSchema.parse(process.env)
  : ({} as z.infer<typeof serverEnvSchema>); // never accessed client-side

// Validate client env everywhere
export const clientEnv = clientEnvSchema.parse({
  NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: process.env.NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY,
  NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY: process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY,
  NEXT_PUBLIC_APP_URL: process.env.NEXT_PUBLIC_APP_URL,
});

The typeof window === 'undefined' check guards server-only validation from running in the browser. Libraries like t3-env (from the T3 stack) provide a more ergonomic wrapper around this pattern if you find yourself repeating it.
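
Consumption is then just a typed import. A sketch, with the '@/lib/env' path alias and the file locations as illustrative assumptions:

// app/api/checkout/route.ts — server-side code reads serverEnv
import Stripe from 'stripe';
import { serverEnv } from '@/lib/env';

const stripe = new Stripe(serverEnv.STRIPE_SECRET_KEY);

// components/checkout-form.tsx — client code only ever touches clientEnv
import { clientEnv } from '@/lib/env';

const appUrl = clientEnv.NEXT_PUBLIC_APP_URL; // typed string, validated at module load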

How Render Handles Environment Variables

On Render, environment variables are set through the dashboard or the render.yaml blueprint file. There are no .env files in production — the platform injects variables directly into process.env at runtime.

# render.yaml — Infrastructure as code for Render
services:
  - type: web
    name: my-api
    env: node
    buildCommand: pnpm install && pnpm build
    startCommand: pnpm start
    envVars:
      - key: NODE_ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres
          property: connectionString
      - key: STRIPE_SECRET_KEY
        sync: false   # Must be set manually in the Render dashboard — not stored in yaml
      - key: CLERK_SECRET_KEY
        sync: false   # Same — sensitive values are never in source control

sync: false tells Render that the value is not in the yaml file and must be configured manually in the dashboard. This is the equivalent of Azure Key Vault references in your ARM templates — a pointer without the secret itself.

Render provides a feature called Secret Files for configuration that must be a file (e.g., service account JSON for Google APIs). This is less common but useful to know.


Key Differences

Concept | .NET (ASP.NET Core) | Node.js / Next.js / NestJS
Base configuration | appsettings.json (JSON, hierarchical) | .env file (flat key=value pairs)
Environment overrides | appsettings.{Environment}.json | .env.development, .env.production
Local dev secrets | dotnet user-secrets (outside project dir) | .env.local or .env (gitignored)
Production secrets | Azure Key Vault / environment variables | Render env vars / secrets manager
Access mechanism | IConfiguration / IOptions<T> (DI) | process.env (global) or ConfigService (DI)
Type safety | Compile-time via IOptions<T> | Runtime via Zod schema validation
Validation failure | Startup exception | Startup crash (with Zod + ConfigModule)
Hierarchy support | Yes — layered providers | Minimal — file priority order only
Client vs. server separation | Framework-level (Blazor, Razor) | NEXT_PUBLIC_ prefix convention (Next.js)
Injection pattern | IOptions<StripeOptions> in constructor | ConfigService<Env, true> in constructor
Config hierarchy separator | : (e.g., Stripe:SecretKey) | __ or just flatten (e.g., STRIPE_SECRET_KEY)

The biggest structural difference: .NET’s IConfiguration gives you type safety through the type system (C# classes bound to configuration sections). In Node.js, you earn that type safety at runtime through schema validation with Zod. The end result — crash on misconfiguration, typed access in code — is the same, but the mechanism is different.


Gotchas for .NET Engineers

1. process.env is always strings, and there is no automatic type conversion.

In appsettings.json, you write "EnableNewCheckout": false and IOptions<FeatureFlags> gives you a bool. In process.env, everything is a string. process.env.FEATURE_FLAG_NEW_CHECKOUT is "false" — the string — not the boolean. This bites .NET engineers constantly:

// Wrong — any non-empty string is truthy in JS, so this check passes
// whenever the variable is set, even when its value is "false"
if (process.env.FEATURE_FLAG_NEW_CHECKOUT) {
  // This runs even when the value is "false"
}

// Correct — explicit comparison or Zod coercion
if (process.env.FEATURE_FLAG_NEW_CHECKOUT === 'true') { ... }

// Best — use Zod to coerce at the boundary so the rest of your code gets a boolean
z.string().transform((val) => val === 'true')

The same applies to numbers. process.env.PORT is "3000", not 3000. process.env.PORT + 1 is "30001", not 3001.
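
The coercion you want at the boundary is explicit. A quick sketch (assuming zod is imported as z):

// Plain coercion
const port = Number(process.env.PORT ?? 3000); // 3000 as a number

// Coercion plus validation: rejects "abc" instead of silently producing NaN
const portValidated = z.coerce.number().int().positive().parse(process.env.PORT ?? '3000');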

2. dotenv does NOT override existing environment variables.

If DATABASE_URL is already set in the shell environment before your app starts, dotenv will not overwrite it with the value from your .env file. This is intentional and correct behavior for production (where real env vars should win), but it catches developers off guard locally:

# If you have DATABASE_URL set in your shell profile or CI environment:
export DATABASE_URL="postgresql://prod-server/myapp"

# And your .env has:
DATABASE_URL="postgresql://localhost/myapp_dev"

# dotenv will NOT overwrite — your app connects to prod. This is a bad day.

You can force an override with dotenv.config({ override: true }), but do this with caution and only in development contexts.

3. The .env file must never be committed, and recovery requires more than .gitignore.

In .NET, user secrets live in ~/.microsoft/usersecrets/ — outside the project directory entirely, so it’s physically impossible to commit them. .env files live in your project root, next to .gitignore. If someone adds .env to .gitignore after committing it, the file is still in git history.

# If .env was ever committed, gitignore is not enough. You must:
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch .env' \
  --prune-empty --tag-name-filter cat -- --all

# And immediately rotate every secret that was in that file.

Prevention is straightforward: add .env to your global gitignore (~/.gitignore_global) so it is never committed in any project:

echo ".env" >> ~/.gitignore_global
git config --global core.excludesFile ~/.gitignore_global

Many teams also add a pre-commit hook that scans for .env files or common secret patterns using secretlint or detect-secrets. This is the equivalent of what Azure DevOps’ secret scanning does automatically.

4. There is no equivalent of IOptions<T> reload on file change without extra setup.

ASP.NET Core’s IOptionsMonitor<T> reloads configuration when appsettings.json changes on disk. process.env is populated once at startup and does not re-read the .env file at runtime. If you change a .env value, you must restart the process. This is rarely a problem in practice — but if you’re building something that needs live config reloads, you’ll need to implement it explicitly.
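
If you genuinely need live reloads, a minimal sketch using Node's fs.watch and dotenv's parse helper (an illustration, not a pattern our projects use):

import { watch, readFileSync } from 'node:fs';
import { parse } from 'dotenv';

// A mutable snapshot of the .env file, re-read whenever the file changes on disk.
let liveConfig = parse(readFileSync('.env'));

watch('.env', () => {
  liveConfig = parse(readFileSync('.env'));
  console.log('Reloaded .env');
});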

5. NestJS ConfigModule’s validate function receives raw process.env, including all system variables.

When you call envSchema.parse(config) in the validate callback, config is the entire process.env merged with your .env file — hundreds of variables including PATH, HOME, USER, etc. Use z.object().strip() (the Zod default) rather than z.object().strict(), or your validation will fail on system variables you didn’t declare:

// This will fail — strict() rejects unknown keys, and there are many
const envSchema = z.object({ DATABASE_URL: z.string() }).strict();

// This is correct — strip() (default) ignores undeclared keys
const envSchema = z.object({ DATABASE_URL: z.string() });

6. Environment variable naming conventions collide with .NET’s hierarchy separator.

IConfiguration uses : to navigate hierarchy: Stripe:SecretKey. When environment variables override appsettings.json values in .NET, they use __ as the hierarchy separator: Stripe__SecretKey. In Node.js, there is no hierarchy — you just flatten everything: STRIPE_SECRET_KEY. This isn’t a bug, but engineers porting configuration from .NET to Node.js sometimes create confusing double-underscore variable names that don’t mean anything in the new context.


Hands-On Exercise

Convert the following appsettings.json structure to a fully validated .env + Zod setup for a NestJS application. The goal is a ConfigService that provides typed access to every value, with the application crashing at startup if any required variable is missing or malformed.

Starting point — appsettings.json:

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=SchoolVision;User Id=sa;Password=..."
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "Clerk": {
    "SecretKey": "sk_test_...",
    "PublishableKey": "pk_test_..."
  },
  "Stripe": {
    "SecretKey": "sk_test_...",
    "WebhookSecret": "whsec_...",
    "PriceIds": {
      "MonthlyPlan": "price_...",
      "AnnualPlan": "price_..."
    }
  },
  "Storage": {
    "BucketName": "my-bucket",
    "Region": "us-east-1",
    "AccessKeyId": "AKIA...",
    "SecretAccessKey": "..."
  }
}

What to produce:

  1. A .env.example file with all required variable names (empty values)
  2. A .env file with local development values (filled in, not committed)
  3. A src/config/env.schema.ts with a Zod schema validating every variable
  4. The ConfigModule.forRoot() registration in AppModule using your schema
  5. One example service that injects ConfigService<Env, true> and uses a typed value

Verify your work by:

  • Renaming .env to .env.bak and starting the app — it should crash with a clear error listing missing variables
  • Setting PORT=abc and starting the app — it should crash with “Expected number, received nan” or similar
  • Running tsc --noEmit — there should be no type errors on config.get(...) calls

Migration guide — flattening the hierarchy:

# appsettings.json key              → .env variable name
ConnectionStrings:DefaultConnection → DATABASE_URL
Clerk:SecretKey                     → CLERK_SECRET_KEY
Clerk:PublishableKey                → NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY  (if Next.js)
Stripe:SecretKey                    → STRIPE_SECRET_KEY
Stripe:WebhookSecret                → STRIPE_WEBHOOK_SECRET
Stripe:PriceIds:MonthlyPlan         → STRIPE_PRICE_MONTHLY
Stripe:PriceIds:AnnualPlan          → STRIPE_PRICE_ANNUAL
Storage:BucketName                  → S3_BUCKET_NAME
Storage:Region                      → AWS_REGION
Storage:AccessKeyId                 → AWS_ACCESS_KEY_ID
Storage:SecretAccessKey             → AWS_SECRET_ACCESS_KEY
Logging:LogLevel:Default            → LOG_LEVEL  (default: "info")

Quick Reference

.NET concept | Node.js / NestJS equivalent
appsettings.json | .env file (loaded by dotenv)
appsettings.Production.json | .env.production (Next.js) or Render env vars
appsettings.Development.json | .env.development or .env.local
dotnet user-secrets set | Edit .env locally (it’s gitignored)
Azure Key Vault | Render env vars marked sync: false
IConfiguration | process.env (raw) or ConfigService (DI)
IOptions<StripeOptions> | ConfigService<Env, true> (typed)
IOptionsMonitor<T> (live reload) | No equivalent — restart required
Data Annotations on config | Zod schema in envSchema
builder.Configuration.GetConnectionString("DefaultConnection") | configService.get('DATABASE_URL')
Stripe:SecretKey (hierarchy separator :) | STRIPE_SECRET_KEY (flat, underscore)
Stripe__SecretKey (env var override separator) | STRIPE_SECRET_KEY (same format either way)
Crash on missing required config | envSchema.parse() throws at startup
ASPNETCORE_ENVIRONMENT | NODE_ENV
bool / int config values | Always string in .env — coerce with Zod
[Required] on config property | z.string() (no .optional())
Config file that may be committed | .env.example (empty values, documents requirements)
Config file that must NOT be committed | .env (add to .gitignore globally)

dotenv and NestJS ConfigModule commands:

# Install dependencies
pnpm add @nestjs/config dotenv zod

# Verify a variable is loaded
node -e "require('dotenv').config(); console.log(process.env.DATABASE_URL)"

# Check which .env files Next.js is loading (the dev server lists them in its startup output)
pnpm dev

# Audit your .env.example vs .env for missing keys
diff <(grep -v '^#' .env.example | cut -d= -f1 | sort) \
     <(grep -v '^#' .env | cut -d= -f1 | sort)

Further Reading

  • NestJS Configuration documentation — the official guide for ConfigModule, ConfigService, and custom configuration namespaces
  • Next.js Environment Variables — covers .env file priority, NEXT_PUBLIC_ prefix, and runtime vs. build-time availability
  • Zod documentation — the schema library used throughout this stack for validation; the “coercion” section is particularly relevant for environment variable parsing
  • t3-env — a thin wrapper around Zod for Next.js and server-side environment validation; worth reading as a reference implementation of the patterns in this article

Testing Philosophy: xUnit/NUnit vs. Jest/Vitest

For .NET engineers who know: xUnit, NUnit, Moq, test projects, [Fact], [Theory], FluentAssertions
You’ll learn: How JS/TS testing maps to your existing mental model — and the one place it genuinely has no equivalent
Time: 15-20 min read


The .NET Way (What You Already Know)

In .NET, tests live in separate projects. Your OrderService in MyApp.Core gets a companion MyApp.Core.Tests project, with a <ProjectReference> pointing back to production code. The test runner discovers tests through attributes: [Fact] marks a single test, [Theory] with [InlineData] parameterizes it. Setup and teardown are handled via the constructor (runs before each test) and IDisposable.Dispose() (runs after). For shared setup across a class, IClassFixture<T> gives you a single instance; for shared state across multiple classes, ICollectionFixture<T> coordinates it.

Mocking is a separate library concern. Moq intercepts interfaces, letting you stub return values with Setup() and verify call behavior with Verify(). The test project installs Moq via NuGet; production code never touches it.

// MyApp.Core.Tests/OrderServiceTests.cs
public class OrderServiceTests : IDisposable
{
    private readonly Mock<IOrderRepository> _mockRepo;
    private readonly Mock<IEmailService> _emailService;
    private readonly OrderService _sut;

    public OrderServiceTests()
    {
        _mockRepo = new Mock<IOrderRepository>();
        _emailService = new Mock<IEmailService>();
        _sut = new OrderService(_mockRepo.Object, _emailService.Object);
    }

    [Fact]
    public async Task PlaceOrder_ValidOrder_ReturnsOrderId()
    {
        // Arrange
        var order = new Order { ProductId = 1, Quantity = 2 };
        _mockRepo.Setup(r => r.SaveAsync(order)).ReturnsAsync(42);

        // Act
        var result = await _sut.PlaceOrderAsync(order);

        // Assert
        Assert.Equal(42, result);
        _emailService.Verify(e => e.SendConfirmationAsync(42), Times.Once);
    }

    [Theory]
    [InlineData(0)]
    [InlineData(-1)]
    public async Task PlaceOrder_InvalidQuantity_Throws(int quantity)
    {
        var order = new Order { ProductId = 1, Quantity = quantity };
        await Assert.ThrowsAsync<ArgumentException>(() => _sut.PlaceOrderAsync(order));
    }

    public void Dispose()
    {
        // cleanup if needed
    }
}

Coverage is measured by dotnet test --collect:"XPlat Code Coverage", reported as Cobertura XML, and visualized in Azure DevOps or via ReportGenerator. The CI pipeline is explicit: build the test project, run it, fail if coverage drops below threshold.


The Vitest Way

Test File Location: No Separate Project

The first thing to internalize: there is no test project. Tests live alongside the code they test.

src/
  services/
    order.service.ts          ← production code
    order.service.test.ts     ← tests, right here
    order.service.spec.ts     ← also valid, same thing

Both *.test.ts and *.spec.ts are valid conventions — spec comes from the BDD world, test is more common in utility code. Pick one per project and be consistent. We use *.test.ts for unit tests and *.spec.ts for integration tests to make them visually distinct in large codebases.

Vitest finds test files by scanning for these patterns automatically — no <ProjectReference>, no build target, no separate csproj. The tradeoff: test code is closer to production code (faster feedback loop, easier navigation), but you need your build process to exclude test files from production bundles. Vitest handles this; the TypeScript compiler is typically configured to ignore *.test.ts files in tsconfig.app.json while tsconfig.test.json includes them.

The describe/it/expect API

Where xUnit uses class structure to group tests, Vitest uses describe blocks. Where [Fact] marks a test, it (or test — they’re identical) marks one. Where xUnit has assertion methods on Assert.*, Vitest chains expectations from expect().

xUnit/NUnit | Vitest equivalent
Test class | describe() block
[Fact] / [Test] | it() or test()
[Theory] / [InlineData] | it.each() / test.each()
Assert.Equal(expected, actual) | expect(actual).toBe(expected)
Assert.True(expr) | expect(expr).toBe(true)
Assert.ThrowsAsync<T>() | await expect(fn()).rejects.toThrow()
Assert.NotNull(obj) | expect(obj).not.toBeNull()
Assert.Contains(item, collection) | expect(collection).toContain(item)
FluentAssertions: result.Should().BeEquivalentTo(expected) | expect(result).toEqual(expected)

One note on toBe vs. toEqual: toBe uses Object.is — reference equality, like ReferenceEquals() in C#. toEqual does a deep structural comparison, like Assert.Equivalent() in xUnit or Should().BeEquivalentTo() in FluentAssertions. You will almost always want toEqual for objects, and toBe for primitives.

Setup and Teardown

Constructor/Dispose maps directly to beforeEach/afterEach. The scoping works the same way: beforeEach inside a describe block runs before each test in that block only.

.NET xUnit lifecycle | Vitest equivalent
Constructor | beforeEach()
IDisposable.Dispose() | afterEach()
IClassFixture<T> (once per class) | beforeAll() / afterAll()
ICollectionFixture<T> (once per collection) | beforeAll() in outer describe

// setup/teardown mapping
describe("OrderService", () => {
  let sut: OrderService;
  let mockRepo: MockedObject<OrderRepository>;

  beforeEach(() => {
    // runs before each test — same as xUnit constructor
    mockRepo = createMockRepo();
    sut = new OrderService(mockRepo);
  });

  afterEach(() => {
    // runs after each test — same as Dispose()
    vi.restoreAllMocks();
  });

  beforeAll(() => {
    // runs once before all tests in this describe — same as IClassFixture
  });

  afterAll(() => {
    // runs once after all tests in this describe
  });
});

Mocking: Built-In, No Separate Library

Moq requires an interface, a mock object, and a setup call per method. Vitest’s mocking is built into the framework and works differently: you mock at the module level, not the interface level.

vi.fn() creates a mock function (equivalent to Mock<IService>().Setup(s => s.Method())):

const mockSave = vi.fn().mockResolvedValue(42);

vi.spyOn() wraps an existing method on an object, letting you track calls without replacing the implementation (unless you want to). Equivalent to Moq’s CallBase behavior:

const spy = vi.spyOn(emailService, 'sendConfirmation').mockResolvedValue(undefined);

vi.mock() replaces an entire module — no equivalent in the .NET world because .NET doesn’t have a module system at that level. More on this in the Gotchas section.

Verifying calls uses the mock’s own properties rather than a separate Verify() call:

// Moq: _emailService.Verify(e => e.SendConfirmationAsync(42), Times.Once);
expect(spy).toHaveBeenCalledTimes(1);
expect(spy).toHaveBeenCalledWith(42);

// Or, checking the most recent call's arguments
expect(spy).toHaveBeenLastCalledWith(42);

Parameterized Tests: it.each

[Theory] + [InlineData] maps to it.each. Two syntaxes:

// Array of arrays — each inner array is one test case
it.each([
  [0, "zero"],
  [-1, "negative"],
  [-999, "large negative"],
])("rejects quantity %i (%s)", async (quantity, _description) => {
  await expect(sut.placeOrder({ productId: 1, quantity }))
    .rejects.toThrow(ValidationError);
});

// Tagged template literal — more readable for named params
it.each`
  quantity | description
  ${0}     | ${"zero"}
  ${-1}    | ${"negative"}
`("rejects $description quantity", async ({ quantity }) => {
  await expect(sut.placeOrder({ productId: 1, quantity }))
    .rejects.toThrow(ValidationError);
});

Side-by-Side: Testing the Same Service

Here is the same OrderService tested in both languages, demonstrating every concept above in parallel.

The service under test

// C# — OrderService.cs
public class OrderService
{
    private readonly IOrderRepository _repository;
    private readonly IEmailService _emailService;

    public OrderService(IOrderRepository repository, IEmailService emailService)
    {
        _repository = repository;
        _emailService = emailService;
    }

    public async Task<int> PlaceOrderAsync(Order order)
    {
        if (order.Quantity <= 0)
            throw new ArgumentException("Quantity must be positive", nameof(order));

        var orderId = await _repository.SaveAsync(order);
        await _emailService.SendConfirmationAsync(orderId);
        return orderId;
    }

    public async Task<IReadOnlyList<Order>> GetOrdersAsync(int userId)
    {
        return await _repository.GetByUserAsync(userId);
    }
}
// TypeScript — order.service.ts
export class OrderService {
  constructor(
    private readonly repository: OrderRepository,
    private readonly emailService: EmailService,
  ) {}

  async placeOrder(order: Order): Promise<number> {
    if (order.quantity <= 0) {
      throw new ValidationError("Quantity must be positive");
    }

    const orderId = await this.repository.save(order);
    await this.emailService.sendConfirmation(orderId);
    return orderId;
  }

  async getOrders(userId: number): Promise<Order[]> {
    return this.repository.getByUser(userId);
  }
}

The tests

// C# — OrderServiceTests.cs (xUnit + Moq)
public class OrderServiceTests : IDisposable
{
    private readonly Mock<IOrderRepository> _mockRepo;
    private readonly Mock<IEmailService> _mockEmail;
    private readonly OrderService _sut;

    public OrderServiceTests()
    {
        _mockRepo = new Mock<IOrderRepository>();
        _mockEmail = new Mock<IEmailService>();
        _sut = new OrderService(_mockRepo.Object, _mockEmail.Object);
    }

    // --- PlaceOrder tests ---

    [Fact]
    public async Task PlaceOrder_ValidOrder_SavesAndSendsEmail()
    {
        // Arrange
        var order = new Order { ProductId = 1, Quantity = 3 };
        _mockRepo.Setup(r => r.SaveAsync(order)).ReturnsAsync(99);
        _mockEmail.Setup(e => e.SendConfirmationAsync(99)).Returns(Task.CompletedTask);

        // Act
        var result = await _sut.PlaceOrderAsync(order);

        // Assert
        Assert.Equal(99, result);
        _mockEmail.Verify(e => e.SendConfirmationAsync(99), Times.Once);
    }

    [Theory]
    [InlineData(0)]
    [InlineData(-1)]
    [InlineData(-100)]
    public async Task PlaceOrder_InvalidQuantity_ThrowsArgumentException(int quantity)
    {
        var order = new Order { ProductId = 1, Quantity = quantity };

        var ex = await Assert.ThrowsAsync<ArgumentException>(
            () => _sut.PlaceOrderAsync(order)
        );
        Assert.Contains("Quantity", ex.Message);

        // Verify nothing was saved
        _mockRepo.Verify(r => r.SaveAsync(It.IsAny<Order>()), Times.Never);
    }

    // --- GetOrders tests ---

    [Fact]
    public async Task GetOrders_ReturnsUserOrders()
    {
        // Arrange
        var expected = new List<Order>
        {
            new() { Id = 1, UserId = 5 },
            new() { Id = 2, UserId = 5 },
        };
        _mockRepo.Setup(r => r.GetByUserAsync(5)).ReturnsAsync(expected);

        // Act
        var result = await _sut.GetOrdersAsync(5);

        // Assert
        Assert.Equal(2, result.Count);
        Assert.All(result, o => Assert.Equal(5, o.UserId));
    }

    public void Dispose()
    {
        // nothing to dispose in this test, but the pattern is here
    }
}
// TypeScript — order.service.test.ts (Vitest)
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
import { OrderService } from "./order.service";
import { ValidationError } from "../errors";
import type { OrderRepository } from "./order.repository";
import type { EmailService } from "../email/email.service";

describe("OrderService", () => {
  let sut: OrderService;
  let mockRepo: { save: ReturnType<typeof vi.fn>; getByUser: ReturnType<typeof vi.fn> };
  let mockEmail: { sendConfirmation: ReturnType<typeof vi.fn> };

  beforeEach(() => {
    mockRepo = {
      save: vi.fn(),
      getByUser: vi.fn(),
    };
    mockEmail = {
      sendConfirmation: vi.fn(),
    };
    sut = new OrderService(
      mockRepo as unknown as OrderRepository,
      mockEmail as unknown as EmailService,
    );
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  // --- placeOrder tests ---

  describe("placeOrder", () => {
    it("saves the order and sends a confirmation email", async () => {
      // Arrange
      const order = { productId: 1, quantity: 3 };
      mockRepo.save.mockResolvedValue(99);
      mockEmail.sendConfirmation.mockResolvedValue(undefined);

      // Act
      const result = await sut.placeOrder(order);

      // Assert
      expect(result).toBe(99);
      expect(mockEmail.sendConfirmation).toHaveBeenCalledTimes(1);
      expect(mockEmail.sendConfirmation).toHaveBeenCalledWith(99);
    });

    it.each([
      [0],
      [-1],
      [-100],
    ])("throws ValidationError for quantity %i", async (quantity) => {
      const order = { productId: 1, quantity };

      await expect(sut.placeOrder(order)).rejects.toThrow(ValidationError);
      await expect(sut.placeOrder(order)).rejects.toThrow("Quantity must be positive");

      // Verify nothing was saved
      expect(mockRepo.save).not.toHaveBeenCalled();
    });
  });

  // --- getOrders tests ---

  describe("getOrders", () => {
    it("returns orders for the given user", async () => {
      // Arrange
      const expected = [
        { id: 1, userId: 5 },
        { id: 2, userId: 5 },
      ];
      mockRepo.getByUser.mockResolvedValue(expected);

      // Act
      const result = await sut.getOrders(5);

      // Assert
      expect(result).toHaveLength(2);
      expect(result.every((o) => o.userId === 5)).toBe(true);
    });
  });
});

Snapshot Testing (No .NET Equivalent)

Snapshot testing is the one Vitest feature with no .NET analog. It serializes the output of a function — usually a rendered UI component, but any serializable value works — to a .snap file on first run. Subsequent runs compare against the saved snapshot and fail if anything changed.

// First run: creates __snapshots__/api-response.test.ts.snap
it("returns the expected order shape", async () => {
  const result = await sut.placeOrder({ productId: 1, quantity: 2 });
  expect(result).toMatchSnapshot();
});

// If you change the return shape, the test fails with a diff.
// To accept the new output as correct: vitest -u (alias for --update)

Snapshots are most valuable for:

  • UI component output (rendered HTML/JSX)
  • API response shapes you want to detect accidental changes to
  • Complex serialized structures where writing toEqual assertions would be tedious

The downside: snapshots can become a rubber stamp that developers update without reviewing. Treat a snapshot update PR the same way you’d treat a schema migration — verify the diff is intentional.


Test Configuration

In .NET, xunit.runner.json configures test runner behavior and you edit *.csproj properties for parallelism. In Vitest, configuration lives in vitest.config.ts (or inline in vite.config.ts):

// vitest.config.ts
import { defineConfig } from "vitest/config";
import tsconfigPaths from "vite-tsconfig-paths";

export default defineConfig({
  plugins: [tsconfigPaths()],  // resolves path aliases from tsconfig
  test: {
    globals: true,             // removes need to import describe/it/expect in every file
    environment: "node",       // or "jsdom" for React component tests
    coverage: {
      provider: "v8",          // or "istanbul"
      reporter: ["text", "lcov", "html"],
      thresholds: {
        lines: 80,
        branches: 75,
        functions: 80,
        statements: 80,
      },
      exclude: [
        "node_modules",
        "**/*.test.ts",
        "**/*.spec.ts",
        "**/index.ts",         // barrel files
      ],
    },
    setupFiles: ["./src/test/setup.ts"],  // global test setup — like TestInitialize
  },
});

The globals: true option is worth highlighting. Without it, you must import describe, it, expect, and vi at the top of every test file — the same boilerplate every time. With globals: true, they’re available everywhere, matching how Jest traditionally works. Our projects enable this; you’ll see test files without imports and that is intentional.


Coverage Reporting

# Run tests with coverage
pnpm vitest run --coverage

# Output: text summary in terminal + HTML report in ./coverage/
 % Coverage report from v8
 File               | % Stmts | % Branch | % Funcs | % Lines
--------------------|---------|----------|---------|--------
 order.service.ts   |   94.12 |    83.33 |   100.0 |   94.12

Coverage integrates with SonarCloud via the LCOV reporter (see Article 7.2) and with GitHub Actions to post coverage summaries on PRs. The HTML report at ./coverage/index.html shows line-by-line coverage with branch indicators — equivalent to Visual Studio’s test coverage highlighting.


Our Testing Strategy

Three tiers, each with a different tool:

Unit Tests — Vitest

Tests for individual services, utilities, and pure functions. Dependencies are mocked. These run in milliseconds and should make up the majority of your test suite.

pnpm vitest run          # run once
pnpm vitest              # watch mode — runs affected tests on file change
pnpm vitest --coverage   # with coverage

Integration Tests — Vitest + Supertest

Tests for NestJS API endpoints — the real request pipeline (middleware, guards, pipes, serialization) against a real database running in Docker. Supertest makes HTTP requests to your NestJS application without starting a network listener.

// order.integration.spec.ts
import { Test } from "@nestjs/testing";
import { INestApplication } from "@nestjs/common";
import request from "supertest";
import { AppModule } from "../app.module";

describe("Order API (integration)", () => {
  let app: INestApplication;

  beforeAll(async () => {
    const module = await Test.createTestingModule({
      imports: [AppModule],
    }).compile();

    app = module.createNestApplication();
    await app.init();
  });

  afterAll(async () => {
    await app.close();
  });

  it("POST /orders creates an order", async () => {
    const response = await request(app.getHttpServer())
      .post("/orders")
      .send({ productId: 1, quantity: 2 })
      .set("Authorization", `Bearer ${testToken}`)
      .expect(201);

    expect(response.body).toMatchObject({
      id: expect.any(Number),
      productId: 1,
      quantity: 2,
    });
  });
});

This is the JS equivalent of WebApplicationFactory<Program> in ASP.NET integration tests — same concept, slightly different wiring. See Article 5.7 for database setup with Testcontainers.

E2E Tests — Playwright

Tests for complete user flows through the browser. Playwright drives Chromium, Firefox, or WebKit, and its API reads like a well-typed Selenium without the friction.

// tests/e2e/place-order.spec.ts
import { test, expect } from "@playwright/test";

test("user can place an order", async ({ page }) => {
  await page.goto("/login");
  await page.fill('[name="email"]', "test@example.com");
  await page.fill('[name="password"]', "secret");
  await page.click('[type="submit"]');

  await page.goto("/products/1");
  await page.fill('[name="quantity"]', "2");
  await page.click("text=Add to Cart");
  await page.click("text=Checkout");

  await expect(page.locator(".order-confirmation")).toBeVisible();
  await expect(page.locator(".order-id")).toContainText(/ORDER-\d+/);
});
pnpm playwright test            # run all E2E tests
pnpm playwright test --ui       # interactive UI mode (closest to Test Explorer)
pnpm playwright test --debug    # step-through debugging with browser visible

Key Differences

Concern | xUnit + Moq (.NET) | Vitest (TypeScript)
Test discovery | Attributes ([Fact], [Test]) | File naming (*.test.ts, *.spec.ts)
Test file location | Separate test project | Alongside production code
Test grouping | Class | describe() block
Single test | [Fact] / [Test] | it() or test()
Parameterized test | [Theory] + [InlineData] | it.each()
Setup (per test) | Constructor | beforeEach()
Teardown (per test) | IDisposable.Dispose() | afterEach()
Setup (per class) | IClassFixture<T> | beforeAll()
Mocking | Moq (separate NuGet package) | vi.fn() / vi.spyOn() (built-in)
Module mocking | Not applicable | vi.mock()
Call verification | Verify() with Times.* | toHaveBeenCalledTimes()
Deep equality | Assert.Equivalent() / BeEquivalentTo() | toEqual()
Reference equality | Assert.Same() | toBe()
Exception assertion | Assert.ThrowsAsync<T>() | expect(fn()).rejects.toThrow()
Snapshot testing | No equivalent | toMatchSnapshot()
Coverage tool | coverlet | @vitest/coverage-v8
Config file | xunit.runner.json / .csproj | vitest.config.ts

Gotchas for .NET Engineers

1. vi.mock() hoists to the top of the file — and it’s surprising

vi.mock() calls are automatically moved to the top of the file by Vitest, before any imports. This means you cannot use variables defined in your test file inside a vi.mock() factory function — they haven’t been initialized yet.

// THIS WILL NOT WORK as expected
const mockSave = vi.fn().mockResolvedValue(42);  // defined here

vi.mock("./order.repository", () => ({
  OrderRepository: vi.fn().mockImplementation(() => ({
    save: mockSave,  // ERROR: mockSave is undefined at hoist time
  })),
}));
// CORRECT: use vi.fn() inside the factory, configure in beforeEach
vi.mock("./order.repository", () => ({
  OrderRepository: vi.fn().mockImplementation(() => ({
    save: vi.fn(),
  })),
}));

describe("OrderService", () => {
  beforeEach(async () => {
    // Access the mocked module to configure behavior per test
    const { OrderRepository } = await import("./order.repository");
    vi.mocked(OrderRepository).mockImplementation(() => ({
      save: vi.fn().mockResolvedValue(42),
    }));
  });
});

In practice, you often avoid vi.mock() entirely for services you control by injecting mock objects directly via the constructor — the same pattern as Moq. vi.mock() is most useful for third-party modules and Node.js built-ins you can’t inject.

2. Forgetting await on async expect — the test passes when it should fail

This is the single most common Vitest mistake for engineers coming from any background:

// WRONG: the assertion is a Promise, never awaited — test passes regardless
it("throws on invalid input", () => {
  expect(sut.placeOrder({ quantity: 0 })).rejects.toThrow();
  //                                      ^ no await — this Promise is ignored
});

// CORRECT
it("throws on invalid input", async () => {
  await expect(sut.placeOrder({ quantity: 0 })).rejects.toThrow();
});

In .NET, Assert.ThrowsAsync<T>() forces you to await it because it returns a Task<T>. Vitest’s expect(promise).rejects.toThrow() is just a fluent chain that returns a Promise — and if you forget await, Vitest sees no failed assertion, marks the test green, and moves on. Always await assertions against Promises.

3. Module mocking scope is file-wide, not test-wide

When you call vi.mock(), it affects every test in the file. If you need different behavior for different tests, configure the mock in beforeEach rather than at the module level, and use vi.resetAllMocks() or vi.restoreAllMocks() in afterEach to clean up between tests.

A common pattern for different per-test behavior:

vi.mock("./email.service");

import { EmailService } from "./email.service";
const MockedEmailService = vi.mocked(EmailService);

describe("OrderService", () => {
  afterEach(() => {
    vi.clearAllMocks();  // clears call history, keeps implementations
  });

  it("sends email on success", async () => {
    MockedEmailService.prototype.sendConfirmation.mockResolvedValue(undefined);
    // ...
  });

  it("still saves order if email fails", async () => {
    MockedEmailService.prototype.sendConfirmation.mockRejectedValue(new Error("SMTP down"));
    // ...
  });
});

4. toBe vs. toEqual is not like == vs. Equals()

Coming from C#, you might assume toBe is value equality and toEqual is reference equality — the opposite is true. toBe uses Object.is(), which is reference equality for objects. toEqual does a deep structural comparison.

const a = { id: 1 };
const b = { id: 1 };

expect(a).toBe(b);     // FAILS — different references
expect(a).toEqual(b);  // PASSES — same shape

expect(1).toBe(1);     // PASSES — primitives compared by value

For asserting on returned objects, almost always use toEqual or toMatchObject (partial match — like BeEquivalentTo with Excluding()).
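
For example (the values here are illustrative):

const order = { id: 42, userId: 5, createdAt: new Date(), total: 19.98 };

expect(order).toMatchObject({ userId: 5, total: 19.98 }); // passes: extra fields are ignored
expect(order).toEqual({ userId: 5, total: 19.98 });       // fails: shapes must match exactly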

5. Test isolation requires manual mock cleanup — it is not automatic

In .NET, each test class instantiation gives you fresh mock objects (because [Fact] creates a new class instance). In Vitest, mock state accumulates across tests unless you reset it. The three reset methods have different scopes:

vi.clearAllMocks();    // clears call history (.mock.calls, .mock.results)
                       // does NOT reset implementations set with mockReturnValue()

vi.resetAllMocks();    // clears call history AND removes implementations
                       // mocks return undefined after this

vi.restoreAllMocks();  // only applies to vi.spyOn() mocks
                       // restores original implementation

Convention: put vi.clearAllMocks() in an afterEach in your global setup file (the setupFiles entry in vitest.config.ts), and only call vi.resetAllMocks() or vi.restoreAllMocks() when you explicitly need to change an implementation mid-suite.
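
A minimal src/test/setup.ts matching that convention (a sketch; the path mirrors the setupFiles entry shown in the config earlier):

// src/test/setup.ts — referenced from `setupFiles` in vitest.config.ts
import { afterEach, vi } from 'vitest';

afterEach(() => {
  vi.clearAllMocks(); // wipe call history between tests; implementations stay in place
});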


Hands-On Exercise

You have a PricingService that calculates order totals. It depends on a ProductRepository to fetch product prices and a DiscountService to apply promotions.

  1. Write the PricingService in src/pricing/pricing.service.ts with a calculateTotal(items: OrderItem[]): Promise<number> method that:

    • Fetches each product’s price via ProductRepository.getPrice(productId: number)
    • Applies discounts via DiscountService.getDiscount(userId: number): Promise<number> (returns a percentage 0-100)
    • Returns the discounted total
    • Throws ValidationError if any items array is empty
  2. Write the tests in src/pricing/pricing.service.test.ts covering:

    • Happy path: correct total with discount applied
    • Zero discount: total equals sum of prices
    • Empty items array: throws ValidationError
    • Parameterized: 0%, 10%, 50%, 100% discount all produce correct totals
    • Repository or discount service failure: error propagates correctly
  3. Run with pnpm vitest pricing.service (Vitest matches by filename fragment)

  4. Add coverage reporting and verify the service hits 90%+ line coverage

  5. Bonus: add one snapshot test for a fixed input that captures the exact output shape


Quick Reference

Task | Command / API
Run all tests | pnpm vitest run
Watch mode | pnpm vitest
Run specific file | pnpm vitest order.service
Run with coverage | pnpm vitest run --coverage
Update snapshots | pnpm vitest run -u
Run E2E tests | pnpm playwright test
Create mock function | vi.fn()
Stub return value (sync) | mockFn.mockReturnValue(value)
Stub return value (async) | mockFn.mockResolvedValue(value)
Stub rejection (async) | mockFn.mockRejectedValue(new Error())
Spy on existing method | vi.spyOn(obj, 'method')
Mock a module | vi.mock('./path/to/module')
Check call count | expect(mockFn).toHaveBeenCalledTimes(n)
Check call arguments | expect(mockFn).toHaveBeenCalledWith(arg1, arg2)
Assert never called | expect(mockFn).not.toHaveBeenCalled()
Deep equality | expect(actual).toEqual(expected)
Partial match | expect(actual).toMatchObject({ id: 1 })
Async throw | await expect(fn()).rejects.toThrow(ErrorClass)
Snapshot | expect(value).toMatchSnapshot()
Reset call history | vi.clearAllMocks()
Reset implementations | vi.resetAllMocks()
Restore spies | vi.restoreAllMocks()

.NET → Vitest concept map

.NET | Vitest
[Fact] | it("description", () => {})
[Theory] + [InlineData] | it.each([[...], [...]])("desc %s", ...)
[Fact(Skip = "reason")] | it.skip("description", ...)
Assert.Equal(expected, actual) | expect(actual).toBe(expected)
Assert.Equivalent(expected, actual) | expect(actual).toEqual(expected)
Assert.Contains(item, list) | expect(list).toContain(item)
Assert.ThrowsAsync<T>() | await expect(fn()).rejects.toThrow(T)
Assert.True(condition) | expect(condition).toBe(true)
Mock<T>().Setup(...) | vi.fn().mockReturnValue(...)
mock.Verify(...) | expect(mock).toHaveBeenCalledWith(...)
Times.Once | .toHaveBeenCalledTimes(1)
Times.Never | .not.toHaveBeenCalled()
IClassFixture<T> | beforeAll() + afterAll()
Constructor | beforeEach()
IDisposable.Dispose() | afterEach()
WebApplicationFactory<T> | Test.createTestingModule() (NestJS)

Further Reading

TypeScript for C# Engineers: The Type System Compared

For .NET engineers who know: C# generics, interfaces, nullable reference types, and the nominal type system
You’ll learn: How TypeScript’s structural type system differs from C#’s nominal one, and how to apply the discipline required to make it safe
Time: 15-20 min read


The .NET Way (What You Already Know)

C#’s type system is nominal. The type’s name is its identity. If you define two classes with identical properties, they are not interchangeable — the compiler enforces the distinction.

public class Dog
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class Cat
{
    public string Name { get; set; }
    public int Age { get; set; }
}

// This does not compile. Dog and Cat are different types,
// regardless of their identical shape.
Dog myPet = new Cat { Name = "Whiskers", Age = 3 }; // CS0029

This is the guarantee that makes C# refactoring reliable. When you rename a type, the compiler finds every violation. The type system tracks what something is, not merely what it looks like.

Nullable reference types (introduced in C# 8.0, and enabled by default in new project templates since .NET 6 via <Nullable>enable</Nullable>) extend this guarantee: the compiler tracks at compile time whether a variable can be null, and it forces you to handle that case explicitly.

string name = null;           // CS8600: Converting null literal to non-nullable type
string? nullable = null;      // Fine — you've declared the intent
int length = nullable.Length; // CS8602: Dereference of possibly null reference
int safeLength = nullable?.Length ?? 0; // Correct

Keep this mental model in mind. TypeScript’s type system solves the same problems but makes different trade-offs, and several of them will surprise you.
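
For the null-safety piece specifically, TypeScript’s counterpart is strictNullChecks, which is part of "strict": true in tsconfig.json. A sketch of the equivalent checks:

// With "strict": true (which turns on strictNullChecks):
let name: string = null;                  // error: Type 'null' is not assignable to type 'string'
let nullable: string | null = null;       // fine: the type declares the intent
const length = nullable.length;           // error: cannot read .length when the value may be null
const safeLength = nullable?.length ?? 0; // correct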


The TypeScript Way

Structural Typing: Shape Over Name

TypeScript’s type system is structural. Compatibility is determined by shape, not by the type’s declared name.

// TypeScript
interface Dog {
  name: string;
  age: number;
}

interface Cat {
  name: string;
  age: number;
}

// This is valid TypeScript. Dog and Cat have the same shape.
const myDog: Dog = { name: "Rex", age: 4 };
const myCat: Cat = myDog; // No error. Shape matches.

Side by side:

Scenario | C# | TypeScript
Two types with same properties | Incompatible (nominal) | Compatible (structural)
A subclass passed as base type | Compatible (inheritance) | Compatible if shape is a superset
A plain object literal typed as an interface | Requires new ClassName() | Object literal is directly assignable
Type identity checked at runtime | Yes (is, typeof, GetType()) | No — types are erased at runtime

Structural typing is powerful — it lets you model duck-typed JS naturally. But it means the compiler will accept assignments you might not intend. Discipline fills that gap.
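
A sketch of the kind of call the compiler happily accepts, reusing the Dog and Cat shapes above:

function vaccinate(dog: Dog): void {
  console.log(`Vaccinating ${dog.name}`);
}

const cat: Cat = { name: "Whiskers", age: 3 };
vaccinate(cat); // compiles without complaint: Cat's shape satisfies Dog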


Primitive Types: The Mapping

C# has a rich set of numeric types. TypeScript has one: number. This is JavaScript’s IEEE 754 double-precision float underneath.

C# type | TypeScript equivalent | Notes
string | string | Same semantics
bool | boolean | Different name
int, long, short | number | All the same at runtime
double, float | number | No distinction
decimal | number | Loss of precision — see Gotchas
char | string (length 1) | No dedicated char type
byte | number | No dedicated byte type
object | object or unknown | Prefer unknown — see below
void | void | Same concept
null | null | Explicit null literal
Guid | string | Convention: UUID strings
DateTime, DateTimeOffset | string or Date | See Gotchas

The number collapse is the biggest practical difference. If your C# code distinguishes between int counts and decimal currency amounts, TypeScript won’t enforce that for you. You’ll need either branded types (see Article 2.6) or Zod schemas (see Article 2.3) to enforce the distinction at runtime.

// TypeScript: all of these are `number`
const count: number = 42;
const price: number = 9.99;
const ratio: number = 0.001;

// C# equivalent:
// int count = 42;
// decimal price = 9.99m;
// double ratio = 0.001;
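
A taste of the branded-type workaround covered in Article 2.6 (a sketch, not that article’s exact implementation):

// Brand the primitive so two kinds of number cannot be mixed up silently
type Cents = number & { readonly __brand: 'Cents' };
type Quantity = number & { readonly __brand: 'Quantity' };

const cents = (n: number) => n as Cents;
const qty = (n: number) => n as Quantity;

function lineTotal(price: Cents, count: Quantity): Cents {
  return cents(price * count);
}

lineTotal(cents(999), qty(2));    // ok
// lineTotal(qty(2), cents(999)); // compile error: the brands don't match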

interface vs. type: The Opinionated Answer

TypeScript has two constructs for defining shapes: interface and type. This causes more unnecessary debate than it deserves. Here is the practical guidance:

Use interface for:

  • Object shapes (API responses, DTOs, component props, domain models)
  • Anything that other interfaces or classes might extend or implement

Use type for:

  • Union types (type Status = 'active' | 'inactive')
  • Intersection types (type AdminUser = User & Admin)
  • Mapped and conditional types
  • Aliases for primitives or tuples

The most important distinction: interfaces are open (they can be re-declared and merged across modules). Types are closed (one declaration, no merging). In practice, this means library authors prefer interfaces because consumers can extend them; application code can use either.

// Interface — open, extensible, preferred for object shapes
interface User {
  id: string;
  email: string;
}

// This declaration merges with the one above — works with interface, fails with type
interface User {
  displayName: string;
}
// Result: User now has id, email, and displayName

// Type alias — closed, preferred for unions and computed shapes
type Status = 'pending' | 'active' | 'suspended';
type NullableString = string | null;
type UserOrAdmin = User | AdminUser;

// Extending an interface (like C# interface inheritance)
interface AdminUser extends User {
  role: 'admin';
  permissions: string[];
}

C# comparison:

// C# interface — also used for object shapes and contracts
public interface IUser
{
    string Id { get; }
    string Email { get; }
}

// C# doesn't have union types — you'd use inheritance, discriminated unions,
// or a OneOf library to approximate this.

Classes in TypeScript vs. C#

TypeScript classes are syntactic sugar over JavaScript’s prototype chain. They look like C# classes but behave differently in important ways.

// TypeScript
class UserService {
  private readonly baseUrl: string;

  constructor(baseUrl: string) {
    this.baseUrl = baseUrl;
  }

  // Shorthand: parameter properties (closest C# analog is a C# 12 primary constructor; saves the explicit assignment)
  // constructor(private readonly baseUrl: string) {}

  async getUser(id: string): Promise<User> {
    const response = await fetch(`${this.baseUrl}/users/${id}`);
    return response.json() as User;
  }
}
// C#
public class UserService
{
    private readonly string _baseUrl;

    public UserService(string baseUrl)
    {
        _baseUrl = baseUrl;
    }

    public async Task<User> GetUser(string id)
    {
        using var client = new HttpClient();
        var response = await client.GetFromJsonAsync<User>($"{_baseUrl}/users/{id}");
        return response!;
    }
}

Access modifiers in TypeScript:

C# modifier | TypeScript equivalent | Notes
public | public (default) | Default in TS; also default in C# for interface members
private | private | TypeScript private is compile-time only — see Gotchas
protected | protected | Same semantics
internal | No equivalent | No module-level visibility like C# internal
private protected | No equivalent
readonly | readonly | Same semantics for properties

TypeScript also has # (ES private fields), which are enforced at runtime:

class Counter {
  #count = 0;           // True private — inaccessible at runtime too
  private legacy = 0;   // Compile-time only — accessible at runtime via any

  increment() {
    this.#count++;
    this.legacy++;
  }
}

const c = new Counter();
// (c as any).legacy     // Works at runtime — TypeScript's `private` is erased
// (c as any)['#count']  // Does NOT work — # fields are truly private

Parameter properties are a TypeScript shorthand worth knowing. Instead of declaring a field, declaring a constructor parameter, and assigning one to the other (the C# way), TypeScript lets you do it in one place:

class ProductService {
  // This single line: declares the field AND assigns it from the constructor parameter
  constructor(
    private readonly db: Database,
    private readonly cache: CacheService
  ) {}
}

Generics: Familiar Territory

Generics in TypeScript map closely to C# generics in syntax and intent. The differences are mostly about what constraints you can express.

// TypeScript generic function
function first<T>(items: T[]): T | undefined {
  return items[0];
}

// Generic interface
interface Repository<T> {
  findById(id: string): Promise<T | null>;
  save(entity: T): Promise<T>;
  delete(id: string): Promise<void>;
}

// Generic with constraint
function getProperty<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

// Constraint: T must have an `id` property
interface HasId {
  id: string;
}

function findById<T extends HasId>(items: T[], id: string): T | undefined {
  return items.find(item => item.id === id);
}
// C# generics for comparison
T? First<T>(IEnumerable<T> items) => items.FirstOrDefault();

interface IRepository<T>
{
    Task<T?> FindById(string id);
    Task<T> Save(T entity);
    Task Delete(string id);
}

// Constraint: T must implement IHasId
T? FindById<T>(IEnumerable<T> items, string id) where T : IHasId
{
    return items.FirstOrDefault(x => x.Id == id);
}

Where TypeScript generics diverge from C#:

  • TypeScript uses extends for constraints (not where)
  • keyof T is a TypeScript-only concept — it produces a union of string literal types representing the property names of T
  • TypeScript’s generics are purely structural: T extends HasId means “T’s shape must include the id: string property,” not “T must be declared as implementing HasId” (see the sketch after this list)
  • No runtime generic type information (unlike C# where typeof(T) and reflection work at runtime)
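
To make the last two bullets concrete, here is a small sketch that reuses the findById helper defined just above (the rows array is purely illustrative):

// No class declares "implements HasId"; the object literals' shape is enough
const rows = [
  { id: 'a1', total: 10 },
  { id: 'b2', total: 25 },
];

const match = findById(rows, 'b2'); // T is inferred as { id: string; total: number }
// At runtime there is no HasId or T to inspect; only plain objects remain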

Enums: The Trap

TypeScript has enums, and they look familiar to C# engineers. Resist the temptation to use them freely. TypeScript enums have several design problems.

The problem with numeric enums:

enum Direction {
  North,  // 0
  South,  // 1
  East,   // 2
  West    // 3
}

// Before TypeScript 5.0, both of these compiled without complaint:
const d: Direction = 100;    // no error before 5.0; newer compilers reject out-of-range literals
const d2: Direction = 0 | 1; // bitwise ops produce plain numbers; historically all were accepted

The problem with string enums:

enum Status {
  Active = 'ACTIVE',
  Inactive = 'INACTIVE',
}

// String enums are nominal — they don't accept plain strings:
const s: Status = 'ACTIVE'; // Error: Type '"ACTIVE"' is not assignable to type 'Status'
// You're forced to use Status.Active everywhere — adds friction, no real safety

The compiled output problem:

TypeScript is supposed to be a type layer over JavaScript — types are erased at compile time. Enums break this rule: they compile to a JavaScript object (an IIFE), which means your runtime bundle includes enum code even though TypeScript types don’t exist at runtime. const enum avoids this by inlining values, but it has its own issues with module boundaries.
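
To see the cost, this is approximately what the numeric Direction enum above compiles to (the exact emit varies slightly by compiler version and target):

var Direction;
(function (Direction) {
    Direction[Direction["North"] = 0] = "North";
    Direction[Direction["South"] = 1] = "South";
    Direction[Direction["East"] = 2] = "East";
    Direction[Direction["West"] = 3] = "West";
})(Direction || (Direction = {}));
// A runtime object with both forward (North -> 0) and reverse (0 -> "North") mappings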

What to use instead:

// Option 1: const object + typeof — the recommended pattern
const Direction = {
  North: 'north',
  South: 'south',
  East: 'east',
  West: 'west',
} as const;

type Direction = typeof Direction[keyof typeof Direction];
// Direction is now: 'north' | 'south' | 'east' | 'west'

function move(dir: Direction) { /* ... */ }
move('north');           // Works
move(Direction.North);   // Works
move('diagonal');        // Error — not in the union

// Option 2: string union literal types — the simplest approach
type Status = 'active' | 'inactive' | 'suspended';

// Option 3: When you need an enum-like object with iteration capability
const HttpStatus = {
  Ok: 200,
  Created: 201,
  BadRequest: 400,
  NotFound: 404,
  InternalServerError: 500,
} as const;

type HttpStatus = typeof HttpStatus[keyof typeof HttpStatus];

The as const assertion is the key: it tells TypeScript to infer the narrowest possible types (literal types like 'north', not string) and makes all properties readonly.
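
A quick before-and-after of what as const changes about inference:

const plain = { mode: 'dark', retries: 3 };
// Inferred as { mode: string; retries: number }: widened and mutable

const frozen = { mode: 'dark', retries: 3 } as const;
// Inferred as { readonly mode: 'dark'; readonly retries: 3 }: literal and readonly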


null and undefined: Two Nothings

C# has one null. TypeScript has two: null and undefined. They are distinct types with distinct semantics.

  • null — an intentional absence of a value. Explicitly assigned.
  • undefined — an uninitialized value. The default state in JavaScript when a variable is declared but not assigned, when an object property doesn’t exist, or when a function returns without a value.
// Both are distinct
let a: string | null = null;       // Intentional null
let b: string | undefined;         // Uninitialized — value is `undefined`
let c: string | null | undefined;  // Could be either

// Optional properties use undefined, not null
interface Config {
  baseUrl: string;
  timeout?: number;    // Same as: timeout: number | undefined
  apiKey?: string;
}

// Optional function parameters
function connect(url: string, timeout?: number): void {
  const t = timeout ?? 5000; // Nullish coalescing — same as C# ??
}

connect('https://api.example.com');         // timeout is undefined
connect('https://api.example.com', 3000);   // timeout is 3000

C# comparison:

// C# — one null to rule them all
string? name = null;
int? timeout = null;

// C# doesn't distinguish "intentional null" from "not provided"
// Optional parameters use default values instead
void Connect(string url, int timeout = 5000) { }

The strict null checks flag (strictNullChecks) is what makes TypeScript’s type system safe for nullability. With it enabled, null and undefined are not assignable to other types unless you explicitly declare them. Without it, TypeScript’s nullability is essentially C# without nullable reference types — a false sense of security.

Always enable strictNullChecks. It’s included in strict: true (more on that below).
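
A small illustration of what the flag changes (the variable names are arbitrary):

// With strictNullChecks (part of strict: true):
const title: string = null;            // Error: Type 'null' is not assignable to type 'string'
const subtitle: string | null = null;  // Fine: the nullability is declared

// Without strictNullChecks, the first line compiles and only fails later at runtime.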

Non-null assertion operator:

// The ! operator — tells TypeScript "I know this isn't null"
// Equivalent to C#'s null-forgiving operator (!) from C# 8.0
const input = document.querySelector<HTMLInputElement>('#email')!; // Tell TS: this exists
const value = input.value; // No null error (and .value exists on HTMLInputElement)

// Use sparingly — it's lying to the type system if you're wrong

any, unknown, and never

These three types have no clean C# equivalents and are worth understanding precisely.

any — the type system opt-out:

let x: any = 'hello';
x = 42;           // Fine
x = true;         // Fine
x.nonExistent();  // No error — you've turned off type checking for x
const y: string = x; // Fine — any is assignable to anything

// Avoid any. It's contagious — it spreads through your codebase.
// The only legitimate uses: interfacing with untyped JS libraries
// before types are available, or rapid prototyping.

unknown — the safe alternative to any:

let x: unknown = getExternalData();

// Unlike any, you can't use unknown without narrowing first:
x.toUpperCase();         // Error: Object is of type 'unknown'
const y: string = x;    // Error: Type 'unknown' not assignable to 'string'

// You must narrow it first:
if (typeof x === 'string') {
  x.toUpperCase(); // Fine inside the narrowed block
}

// Or use a type assertion (risky — you're responsible):
const s = x as string;

unknown is what you should use when you genuinely don’t know the type upfront — when parsing JSON from an external source, for example. It forces you to verify before using, rather than silently propagating an unchecked assumption.
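
For example, a minimal sketch of that boundary pattern (the payload literal is just for illustration):

// JSON.parse returns `any`; assigning to `unknown` forces narrowing before use
const raw: unknown = JSON.parse('{"email":"ada@example.com"}');

if (typeof raw === 'object' && raw !== null && 'email' in raw) {
  const email = (raw as { email: unknown }).email; // still unknown; verify its type too
  if (typeof email === 'string') {
    console.log(email.toLowerCase()); // safe: narrowed to string
  }
}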

never — the bottom type:

never is the type for values that can never exist. The closest C# analogy is a void method that always throws (C#’s [DoesNotReturn] attribute expresses the same intent), but never is a real type, and it is also used in exhaustiveness checking.

// A function that never returns (always throws or loops forever)
function fail(message: string): never {
  throw new Error(message);
}

// Exhaustiveness checking — the most useful application
type Shape = 'circle' | 'square' | 'triangle';

function area(shape: Shape): number {
  switch (shape) {
    case 'circle': return Math.PI * 5 * 5;
    case 'square': return 25;
    case 'triangle': return 12.5;
    default:
      // If you add a new Shape variant and forget to handle it,
      // the compiler will flag this assignment — shape would be `never`
      // only if all cases are exhausted
      const _exhaustive: never = shape;
      throw new Error(`Unhandled shape: ${shape}`);
  }
}

This exhaustiveness pattern is the TypeScript equivalent of C#’s pattern matching exhaustiveness in switch expressions.


The strict Compiler Flag Family

The TypeScript compiler has a collection of strictness flags. The strict: true setting in tsconfig.json enables all of them. You should always start with strict: true and work from there. Disabling individual flags is an escape hatch for migrating legacy code, not a permanent configuration.

{
  "compilerOptions": {
    "strict": true
  }
}

strict: true enables:

| Flag | What it enforces |
|---|---|
| strictNullChecks | null and undefined must be declared explicitly in types |
| strictFunctionTypes | Function parameter types are checked more strictly (contravariantly) |
| strictBindCallApply | bind, call, and apply are checked against the target function's signature |
| strictPropertyInitialization | Class properties must be initialized (in the constructor or with an initializer) |
| noImplicitAny | Expressions and declarations with an implied any type are errors |
| noImplicitThis | this with an implied any type is an error |
| alwaysStrict | Files are parsed in strict mode and 'use strict' is emitted |

Additional flags worth considering beyond strict:

{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,  // array[i] returns T | undefined, not T
    "exactOptionalPropertyTypes": true, // undefined must be explicit in optional props
    "noImplicitReturns": true,          // All code paths must return a value
    "noFallthroughCasesInSwitch": true  // No accidental switch case fallthrough
  }
}

noUncheckedIndexedAccess is particularly valuable for .NET engineers: it makes array indexing honest. In C#, items[0] throws at runtime if the array is empty. In TypeScript without this flag, items[0] has type T even though it might be undefined. With this flag, items[0] has type T | undefined, and you’re forced to check.
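
A short sketch of the difference the flag makes:

const names: string[] = [];

// With noUncheckedIndexedAccess, names[0] has type string | undefined:
const first = names[0];
// first.toUpperCase();                              // Error: 'first' is possibly 'undefined'
console.log(first?.toUpperCase() ?? 'no names yet'); // handle the empty case explicitly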


Key Differences

| Concept | C# | TypeScript |
|---|---|---|
| Type system | Nominal (name-based identity) | Structural (shape-based identity) |
| Numeric types | int, long, decimal, double, float | number (all are the same) |
| Nullability | Nullable<T> / T? with NRTs | T \| null, T \| undefined, or ? on optional properties |
| Two kinds of null | No | Yes — null and undefined are distinct |
| Enums | Safe, nominal, no spurious assignability | Problematic — prefer const objects or union types |
| Generic constraints | where T : IInterface, new() | T extends Shape (structural) |
| Type information at runtime | Yes (reflection) | No — types are erased at compilation |
| Type safety opt-out | dynamic | any |
| Type safety opt-in for unknowns | object | unknown |
| Exhaustiveness | Roslyn warning in switch expressions | never + explicit checks |
| Access modifiers | Enforced at runtime (IL) | private is compile-time only; # fields are enforced at runtime |
| Classes vs. interfaces | Interfaces are pure abstractions; both exist at runtime | Interfaces and type aliases are erased at compile time; classes exist at runtime |

Gotchas for .NET Engineers

1. private is not private at runtime

TypeScript’s private keyword is enforced only by the compiler. At runtime, after compilation to JavaScript, all properties are accessible. This matters for security, serialization, and interop.

class ApiClient {
  private apiKey: string;

  constructor(apiKey: string) {
    this.apiKey = apiKey;
  }
}

const client = new ApiClient('secret-key-123');
console.log((client as any).apiKey); // Prints: secret-key-123

// If you need actual runtime privacy, use # (ES private fields):
class SafeClient {
  #apiKey: string;

  constructor(apiKey: string) {
    this.#apiKey = apiKey;
  }
}

const safe = new SafeClient('secret-key-123');
console.log((safe as any)['#apiKey']); // undefined — truly inaccessible

The implication: never use TypeScript’s private to hide sensitive data. It’s a development-time contract, not a security mechanism.

2. number cannot represent decimal safely

C# decimal is a 128-bit type designed for financial arithmetic. TypeScript’s number is a 64-bit IEEE 754 float — the same type C# uses for double. Financial calculations in TypeScript are dangerous without an arbitrary precision library.

// This produces the wrong result — classic floating point issue
console.log(0.1 + 0.2); // 0.30000000000000004

// C# decimal avoids this:
// decimal a = 0.1m; decimal b = 0.2m; // 0.1 + 0.2 = 0.3 exactly

// For financial values in TypeScript, use a library:
// - dinero.js (recommended for currency)
// - decimal.js or big.js (arbitrary precision)
import Dinero from 'dinero.js';
const price = Dinero({ amount: 1099, currency: 'USD' }); // $10.99 in cents
const tax = price.percentage(8);
const total = price.add(tax);

If your .NET codebase uses decimal anywhere that matters financially, plan for this when bringing that logic to TypeScript.

3. Type assertions (as) are not casts — they are lies the compiler accepts

In C#, a cast throws InvalidCastException at runtime if the types are incompatible. TypeScript’s as operator tells the compiler to trust you — it performs no runtime check whatsoever.

// This compiles fine and produces a corrupt object silently:
const user = JSON.parse(apiResponse) as User;
// user looks like User to TypeScript — but if the API returns unexpected data,
// you won't find out until runtime when you access user.email and it's undefined

// C# equivalent would throw at deserialization:
// var user = JsonSerializer.Deserialize<User>(apiResponse);
// Missing required fields throw an exception

// The correct TypeScript approach: validate at the boundary with Zod
import { z } from 'zod';

const UserSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  displayName: z.string(),
});

// This throws a descriptive error if the shape doesn't match:
const user = UserSchema.parse(JSON.parse(apiResponse));

See Article 2.3 for the full runtime validation story with Zod.

4. Structural typing allows unintended assignments across domain types

This is the most significant discipline gap relative to C#. In C#, you can’t pass an OrderId where a UserId is expected even if both are Guid. In TypeScript with structural typing, you can pass a User where an Order is expected if their shapes happen to match.

interface UserId {
  value: string;
}

interface OrderId {
  value: string;
}

// Identical shapes — TypeScript considers them compatible:
const userId: UserId = { value: 'user-123' };
const orderId: OrderId = userId; // No error. This is a bug in the making.

function deleteOrder(id: OrderId): void { /* ... */ }
deleteOrder(userId); // TypeScript accepts this

The solution is branded types, covered in Article 2.6:

type UserId = string & { readonly _brand: 'UserId' };
type OrderId = string & { readonly _brand: 'OrderId' };

function createUserId(value: string): UserId {
  return value as UserId;
}

function deleteOrder(id: OrderId): void { /* ... */ }
const userId = createUserId('user-123');
deleteOrder(userId); // Now this correctly errors

5. TypeScript enums produce JavaScript output — and numeric enums have a history of spurious assignability

As covered above, numeric enums historically accepted any number. The TypeScript team acknowledged the design problem and tightened it in TypeScript 5.0, which rejects out-of-range numeric literals, but enums still compile to runtime JavaScript and string enums still add friction. The practical advice is unchanged: avoid enums and use the const object pattern.

// Broken before TypeScript 5.0: any number was assignable
enum Priority { Low = 1, Medium = 2, High = 3 }
const p: Priority = 9999; // No error on pre-5.0 compilers

// Correct — only the declared values are valid
const Priority = { Low: 1, Medium: 2, High: 3 } as const;
type Priority = typeof Priority[keyof typeof Priority]; // 1 | 2 | 3
const p: Priority = 9999; // Error: Type '9999' is not assignable to type '1 | 2 | 3'

6. Date serialization between TypeScript and other backends

JavaScript’s Date object is a thin wrapper around a millisecond Unix timestamp. When you serialize it to JSON (JSON.stringify), it produces an ISO 8601 string. When you deserialize JSON, JSON.parse does NOT automatically convert ISO strings back to Date objects — they remain strings.

const event = { name: 'Launch', date: new Date('2026-03-01') };
const json = JSON.stringify(event);
// '{"name":"Launch","date":"2026-03-01T00:00:00.000Z"}'

const parsed = JSON.parse(json);
parsed.date instanceof Date; // false — it's a string
typeof parsed.date;          // 'string'

// You must explicitly convert, or use Zod's z.coerce.date():
import { z } from 'zod';
const EventSchema = z.object({
  name: z.string(),
  date: z.coerce.date(), // Converts ISO string to Date
});

This is especially relevant when consuming .NET or Python APIs, where date serialization formats can differ. Map dates at the boundary, not deep in your application code.
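
As a sketch of that boundary mapping for a hypothetical .NET invoice endpoint (the schema name and URL are illustrative, not from a real API):

import { z } from 'zod';

// Parse the wire format once, at the edge; the rest of the app sees real Date objects
const InvoiceWireSchema = z.object({
  id: z.string(),
  issuedAt: z.coerce.date(),          // ISO string from the .NET API becomes a Date
  dueDate: z.coerce.date().nullable(),
});

type Invoice = z.infer<typeof InvoiceWireSchema>;

async function fetchInvoice(id: string): Promise<Invoice> {
  const res = await fetch(`/api/invoices/${id}`);
  return InvoiceWireSchema.parse(await res.json());
}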


Hands-On Exercise

You have a C# domain model for an invoice system. Your task is to translate it to TypeScript using the patterns from this article.

The C# source:

public enum InvoiceStatus
{
    Draft,
    Sent,
    Paid,
    Overdue,
    Cancelled
}

public class LineItem
{
    public Guid Id { get; init; }
    public string Description { get; init; }
    public decimal UnitPrice { get; init; }
    public int Quantity { get; init; }
    public decimal Total => UnitPrice * Quantity;
}

public class Invoice
{
    public Guid Id { get; init; }
    public string InvoiceNumber { get; init; }
    public Guid CustomerId { get; init; }
    public InvoiceStatus Status { get; init; }
    public DateTimeOffset IssuedAt { get; init; }
    public DateTimeOffset? DueDate { get; init; }
    public IReadOnlyList<LineItem> LineItems { get; init; }
    public string? Notes { get; init; }
}

public interface IInvoiceRepository
{
    Task<Invoice?> GetById(Guid id);
    Task<IReadOnlyList<Invoice>> GetByCustomer(Guid customerId, InvoiceStatus? status = null);
    Task<Invoice> Save(Invoice invoice);
}

Your task:

  1. Convert InvoiceStatus to a const object + union type (not a TypeScript enum).
  2. Define LineItem and Invoice as TypeScript interfaces. Use readonly properties throughout.
  3. Add a computed total property to LineItem that TypeScript can express structurally.
  4. Decide: where does decimal precision matter here, and how would you note the risk in your types?
  5. Convert IInvoiceRepository to a TypeScript interface with correct return types (use Promise<T>, not Task<T>).
  6. Add proper null vs. undefined semantics — where is something intentionally absent vs. not yet provided?
  7. Enable strict: true in a minimal tsconfig.json and verify your types compile.

Stretch goal: Add a branded type for InvoiceId and CustomerId so they cannot be accidentally swapped.


Quick Reference

| C# | TypeScript | Notes |
|---|---|---|
| int, long, double, float | number | All map to the same runtime type |
| decimal | number (risky) | Use dinero.js or decimal.js for money |
| string | string | Identical semantics |
| bool | boolean | Different name |
| char | string | No char type in TS |
| Guid | string | Convention only — no UUID type |
| DateTime | string (ISO) or Date | Serialize/deserialize manually or via Zod |
| T? / Nullable<T> | T \| null | Explicit nullability |
| Optional parameter | param?: T (= T \| undefined) | undefined, not null |
| object | unknown | Use unknown, not any |
| dynamic | any | Both disable type checking; both dangerous |
| void (throws) | never | Bottom type — function never returns |
| interface IFoo | interface Foo | TS omits the I prefix convention |
| public class Foo | class Foo | Access modifiers work similarly |
| private field | #field for real privacy | private keyword is compile-time only |
| readonly property | readonly property | Same semantics |
| where T : IShape | T extends Shape | Structural constraint, not nominal |
| typeof(T) | Not available | Types are erased at runtime |
| is type check | typeof, instanceof, type guards | User-defined type guards for complex checks |
| enum Direction { North } | const Direction = { North: 'north' } as const | Avoid TS enums — use const objects |
| Nullable<T> + NRTs enabled | strict: true in tsconfig | Both require opt-in; both are mandatory |
| JsonSerializer.Deserialize<T>() | Schema.parse(JSON.parse(s)) | Zod validates; as T does not |

tsconfig.json minimum for a new project:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "./dist"
  }
}

Further Reading

Advanced TypeScript Types: Things C# Can’t Do

For .NET engineers who know: C# generics, interfaces, abstract classes, LINQ, pattern matching
You’ll learn: TypeScript’s advanced type system features — union types, conditional types, mapped types, discriminated unions, and more — with honest assessments of when to use them and when they are overkill
Time: 25-30 min read


The .NET Way (What You Already Know)

C#’s type system is rich but nominally typed: a Dog is a Dog because it was declared class Dog, not because it happens to have a Bark() method. Generics give you parameterized types (List<T>, Task<T>, IRepository<T>), interfaces define contracts, and abstract classes share implementation. You use where T : class and where T : IEntity to constrain generic parameters.

The C# type system is deliberately conservative. It prioritizes correctness and expressibility for object-oriented patterns. What it does not offer natively: the ability to express “this value is one of these specific string literals,” the ability to derive a new type from an existing one by picking a subset of its properties, or the ability to write a type that resolves differently depending on what type you pass in.

For most enterprise .NET code, these constraints are invisible. The pattern matching added in C# 7-9 addressed some gaps. But there are entire categories of type-level computation that C# cannot do at all.

TypeScript’s type system can. And in real-world TypeScript projects, these features appear constantly — in library code you will consume, in the patterns your team already uses, and in the errors you will debug when you get them wrong.

This article covers the advanced features. Approach them as a toolkit, not a showcase. Each one solves a specific problem. Each one has a failure mode where it adds complexity with no real benefit.


The TypeScript Way

Union Types: “This or That”

A union type says a value can be one of several types:

// TypeScript
type StringOrNumber = string | number;

function format(value: StringOrNumber): string {
  return String(value);
}

format("hello"); // ok
format(42);      // ok
format(true);    // error: Argument of type 'boolean' is not assignable

Closest C# analogy: None that’s clean. You might reach for a common base class, an object parameter, or an overloaded method. The closest structural approximation is a discriminated union via a sealed class hierarchy — but that requires multiple class declarations and a runtime dispatch pattern. TypeScript’s union type is a first-class, zero-runtime-cost construct.

Practical use case: API responses often have different shapes depending on success or failure.

// Without union types — you'd need a class hierarchy or object with optional fields
type ApiResponse<T> =
  | { status: "success"; data: T }
  | { status: "error"; code: number; message: string };

function handleResponse(response: ApiResponse<User>): void {
  if (response.status === "success") {
    console.log(response.data.name); // TypeScript knows .data exists here
  } else {
    console.log(response.message);   // TypeScript knows .message exists here
  }
}

When not to use it: If your “union” grows to five or more members and you are constantly checking the discriminant, you may actually want a class hierarchy or a dedicated state machine. Union types with many arms are hard to extend — every check site needs updating.


Intersection Types: “This and That”

Where union types are “or,” intersection types are “and”:

// TypeScript
type Named = { name: string };
type Aged = { age: number };
type Person = Named & Aged;

const person: Person = { name: "Chris", age: 40 }; // must satisfy both

Closest C# analogy: Implementing multiple interfaces. class Person : INamed, IAged. The difference: intersection types work on object literal shapes without class declarations. You compose types structurally, not nominally.

Practical use case: Merging types in utility functions — for example, a generic “with ID” wrapper:

type WithId<T> = T & { id: string };

type UserDto = { name: string; email: string };
type UserRecord = WithId<UserDto>;
// { name: string; email: string; id: string }

function save<T>(entity: T): WithId<T> {
  return { ...entity, id: crypto.randomUUID() };
}

When not to use it: Intersecting two types that have conflicting properties for the same key produces never for that property, which silently breaks things:

type A = { id: string };
type B = { id: number };
type Broken = A & B; // { id: never } — no value can satisfy this

Intersections work best when the merged types have non-overlapping properties.


Literal Types: Exact Values as Types

In C#, a string can hold any string. In TypeScript, you can create a type that only accepts specific string (or number, or boolean) values:

// TypeScript
type Direction = "north" | "south" | "east" | "west";
type HttpMethod = "GET" | "POST" | "PUT" | "DELETE" | "PATCH";
type Port = 80 | 443 | 8080;

function move(direction: Direction): void {
  // direction is guaranteed to be one of the four values
}

move("north");  // ok
move("up");     // error: Argument of type '"up"' is not assignable

Closest C# analogy: enum, but better in two ways. First, string literal types carry their value as their identity — you do not need to convert between enum and string for serialization. Second, literal types participate in the full type system and can be used anywhere a type can appear.

C# comparison:

// C# — you need an enum and then string conversion for JSON, API contracts, etc.
public enum Direction { North, South, East, West }

// TypeScript — the string literal IS the value
type Direction = "north" | "south" | "east" | "west";
const d: Direction = "north"; // serializes directly

Practical use case: Strongly typed event names, CSS property values, configuration keys — anywhere you have a fixed, small set of valid string values.

When not to use it: If the set of valid values is dynamic, comes from a database, or needs to be extended without code changes. In those cases, a validated string at runtime (via Zod — see Article 2.3) is the right tool, not a compile-time literal type.


Template Literal Types: Type-Level String Interpolation

TypeScript can construct string literal types by combining other string literals — at the type level, not at runtime:

// TypeScript
type EventName = "click" | "focus" | "blur";
type HandlerName = `on${Capitalize<EventName>}`;
// "onClick" | "onFocus" | "onBlur"

type CSSProperty = "margin" | "padding";
type CSSDirection = "Top" | "Bottom" | "Left" | "Right";
type CSSFullProperty = `${CSSProperty}${CSSDirection}`;
// "marginTop" | "marginBottom" | ... | "paddingRight" (8 combinations)

C# analogy: None. C# has no mechanism to compute string types at compile time.

Practical use case: Typed event maps, typed route parameters, typed CSS-in-TS:

// Typed route parameter extraction
type ExtractParams<Route extends string> =
  Route extends `${string}:${infer Param}/${infer Rest}`
    ? Param | ExtractParams<`/${Rest}`>
    : Route extends `${string}:${infer Param}`
    ? Param
    : never;

type UserRouteParams = ExtractParams<"/users/:userId/posts/:postId">;
// "userId" | "postId"

// Practical typed router:
function buildRoute<T extends string>(
  template: T,
  params: Record<ExtractParams<T>, string>
): string {
  let result: string = template;
  for (const [key, value] of Object.entries(params)) {
    result = result.replace(`:${key}`, value as string);
  }
  return result;
}

const url = buildRoute("/users/:userId/posts/:postId", {
  userId: "abc",
  postId: "123",
});
// TypeScript enforces that you provide exactly userId and postId

When not to use it: Template literal types get unwieldy fast. If the combinations produce more than a dozen string values, they become expensive for the compiler and hard to read in error messages. The router example above is near the limit of practical complexity.


Discriminated Unions: The Functional State Pattern

A discriminated union is a union of types that share a common “discriminant” field — a literal-typed property that uniquely identifies which member of the union you have. This is the TypeScript equivalent of a sealed class hierarchy in C#, but it composes more cleanly.

// TypeScript
type LoadingState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: User[] }
  | { status: "error"; error: Error };
// The closest C# equivalent — much more ceremony
public abstract record LoadingState;
public record Idle : LoadingState;
public record Loading : LoadingState;
public record Success(IEnumerable<User> Data) : LoadingState;
public record Error(Exception Exception) : LoadingState;

// Pattern matching in C# 9+
var result = state switch {
    Success s => s.Data,
    Error e   => throw e.Exception,
    _         => throw new InvalidOperationException()
};

TypeScript’s version — with switch statements or conditional chains — is more compact, requires no base type, and the compiler narrows the type automatically based on the discriminant check:

// TypeScript narrowing via discriminated union
function render(state: LoadingState): string {
  switch (state.status) {
    case "idle":
      return "Waiting...";
    case "loading":
      return "Loading...";
    case "success":
      return state.data.map((u) => u.name).join(", "); // data is typed here
    case "error":
      return `Error: ${state.error.message}`;           // error is typed here
  }
}

The compiler can also enforce exhaustiveness, not through a tsconfig flag but through a coding pattern: add a default branch that hands the leftover value to a function whose parameter is never:

function assertNever(value: never): never {
  throw new Error(`Unhandled discriminant: ${JSON.stringify(value)}`);
}

function render(state: LoadingState): string {
  switch (state.status) {
    case "idle":    return "Waiting...";
    case "loading": return "Loading...";
    case "success": return state.data.map((u) => u.name).join(", ");
    case "error":   return `Error: ${state.error.message}`;
    default:        return assertNever(state); // compiler error if a case is missing
  }
}

Practical use case: UI state machines, API response shapes, command/event types in CQRS-like patterns, WebSocket message types.

When not to use it: Discriminated unions do not support inheritance-like composition well. If you need to add behavior to each variant (methods, not just data), a class hierarchy remains more ergonomic.


Conditional Types: Type-Level If/Else

Conditional types apply a conditional at the type level:

// T extends U ? X : Y
type IsString<T> = T extends string ? true : false;

type A = IsString<string>;  // true
type B = IsString<number>;  // false
type C = IsString<"hello">; // true — "hello" extends string

On its own this looks academic. In combination with generics, it enables patterns that are genuinely useful:

// Extract the resolved type from a Promise
type Awaited<T> = T extends Promise<infer R> ? R : T;
// (This is now built into TypeScript, but understanding it matters)

type UserData = Awaited<Promise<User>>;  // User
type NumberData = Awaited<number>;       // number

// NonNullable removes null and undefined from a type
type NonNullable<T> = T extends null | undefined ? never : T;

type MaybeString = string | null | undefined;
type DefinitelyString = NonNullable<MaybeString>; // string

C# analogy: The closest C# mechanism is generic constraints (where T : class) combined with overloaded methods — but this enforces constraints rather than computing output types. C# has no way to say “if T is X, return Y, otherwise return Z” in the type system.

Practical use case — typed API client method overloads:

// Different return types based on whether pagination is requested
type PaginatedResponse<T> = { data: T[]; total: number; page: number };

type QueryResult<T, Paginated extends boolean> =
  Paginated extends true ? PaginatedResponse<T> : T[];

async function query<T, P extends boolean = false>(
  endpoint: string,
  paginate?: P
): Promise<QueryResult<T, P>> {
  // implementation
  throw new Error("not implemented");
}

// TypeScript has no partial type-argument inference: once T is written out explicitly,
// the omitted P falls back to its default (false) instead of being inferred from the
// argument, so the paginated call must spell out both type arguments.
const users = await query<User>("/users");              // User[]
const paged = await query<User, true>("/users", true);  // PaginatedResponse<User>

When not to use it: Conditional types can nest multiple levels deep and become essentially unreadable. If you find yourself writing T extends A ? (T extends B ? X : Y) : Z, step back and ask whether two separate functions or a union return type solves the problem more clearly. They are powerful; they are also the most common source of “I wrote this, and now I cannot debug it” type-system problems.


Mapped Types: Transform Every Key in a Type

Mapped types iterate over the keys of a type and produce a new type by applying a transformation:

// The syntax: { [K in keyof T]: transformation }
type Readonly<T> = { readonly [K in keyof T]: T[K] };
type Partial<T> = { [K in keyof T]?: T[K] };
type Required<T> = { [K in keyof T]-?: T[K] }; // -? removes optional
type Nullable<T> = { [K in keyof T]: T[K] | null };

These are all built into TypeScript, but understanding that they are derived — not primitive — changes how you think about the type system.

The built-in utility types you will use constantly:

type User = {
  id: string;
  name: string;
  email: string;
  role: "admin" | "user";
  createdAt: Date;
};

// Partial<T> — all fields optional (PATCH request body)
type UpdateUserDto = Partial<User>;
// { id?: string; name?: string; email?: string; ... }

// Pick<T, K> — only keep specified keys
type UserSummary = Pick<User, "id" | "name">;
// { id: string; name: string }

// Omit<T, K> — remove specified keys
type CreateUserDto = Omit<User, "id" | "createdAt">;
// { name: string; email: string; role: "admin" | "user" }

// Record<K, V> — typed dictionary
type UsersByRole = Record<"admin" | "user", User[]>;
// { admin: User[]; user: User[] }

// Required<T> — remove all optionals
type FullUser = Required<Partial<User>>;
// Same as User — all fields mandatory again

C# comparison:

// C# — requires separate class declarations for each shape
public class User { public string Id; public string Name; public string Email; }
public class UpdateUserDto { public string? Name; public string? Email; }       // manual
public class CreateUserDto { public string Name; public string Email; }         // manual
public class UserSummary { public string Id; public string Name; }              // manual

// TypeScript — derived from User automatically
type UpdateUserDto = Partial<Omit<User, "id" | "createdAt">>;
type CreateUserDto = Omit<User, "id" | "createdAt">;
type UserSummary = Pick<User, "id" | "name">;

If you change the User type in TypeScript, all three derived types update automatically. In C#, you update four separate classes manually.

Custom mapped types — a real example:

// Convert every method in a type to return a Promise version
type Promisify<T> = {
  [K in keyof T]: T[K] extends (...args: infer A) => infer R
    ? (...args: A) => Promise<R>
    : T[K];
};

interface SyncRepository {
  findById(id: string): User;
  save(user: User): void;
  delete(id: string): boolean;
}

type AsyncRepository = Promisify<SyncRepository>;
// {
//   findById(id: string): Promise<User>;
//   save(user: User): Promise<void>;
//   delete(id: string): Promise<boolean>;
// }

When not to use it: Do not create custom mapped types for one-off situations. If you only need a specific shape once, just declare that type explicitly. Mapped types earn their complexity when you need to derive several types from one source of truth, or when you are building library/utility code that will be reused across many types.


The infer Keyword: Extract Type Components

infer appears inside conditional types and lets you capture a part of a matched type into a named type variable:

// Extract the return type of a function
type ReturnType<T> = T extends (...args: any[]) => infer R ? R : never;

type GetUser = () => Promise<User>;
type UserResult = ReturnType<GetUser>; // Promise<User>

// Extract the element type of an array
type ElementType<T> = T extends (infer E)[] ? E : never;

type Names = ElementType<string[]>; // string
type Ids = ElementType<number[]>;   // number

// Extract the resolved value from a Promise
type UnwrapPromise<T> = T extends Promise<infer V> ? V : T;

type ResolvedUser = UnwrapPromise<Promise<User>>; // User
type PassThrough = UnwrapPromise<string>;          // string

C# analogy: Reflection-based type extraction at runtime — but TypeScript’s infer operates entirely at compile time with zero runtime cost.

Practical use case — extracting function parameter types for wrapper functions:

// NestJS interceptor pattern: wrap any async service method with retry logic
type AsyncFn = (...args: any[]) => Promise<any>;

type RetryWrapper<T extends AsyncFn> = (
  ...args: Parameters<T>
) => Promise<Awaited<ReturnType<T>>>;

function withRetry<T extends AsyncFn>(fn: T, maxAttempts = 3): RetryWrapper<T> {
  return async (...args) => {
    let lastError: unknown;
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return await fn(...args);
      } catch (err) {
        lastError = err;
      }
    }
    throw lastError;
  };
}

async function fetchUser(id: string): Promise<User> {
  // implementation
  throw new Error("not implemented");
}

const robustFetchUser = withRetry(fetchUser); // (id: string) => Promise<User>
// Type fully preserved through the wrapper

When not to use it: infer is exclusively for advanced utility types and library-level code. Application code should rarely need it directly. If you find yourself reaching for infer in a controller or service, you are almost certainly over-engineering the solution.


The satisfies Operator: Validate Without Widening

Added in TypeScript 4.9, satisfies validates that a value conforms to a type without changing the inferred type of the value. This is the distinction between “I know this fits the type” and “I want TypeScript to infer the specific type but also confirm it fits.”

type ColorMap = Record<string, string | [number, number, number]>;

// Without satisfies — TypeScript widens to the annotation type
const colors: ColorMap = {
  red: [255, 0, 0],
  green: "#00ff00",
  blue: [0, 0, 255],
};

// colors.red has type `string | [number, number, number]`
// TypeScript loses the knowledge that red is specifically a tuple
colors.red.toUpperCase(); // error — TS thinks it might be a tuple

// With satisfies — validation without widening
const palette = {
  red: [255, 0, 0],
  green: "#00ff00",
  blue: [0, 0, 255],
} satisfies ColorMap;

// palette.red has type [number, number, number] — the specific inferred type
// palette.green has type string — specific inferred type
palette.red.map((c) => c * 2); // ok — TypeScript knows it's a tuple
palette.green.toUpperCase();   // ok — TypeScript knows it's a string

C# analogy: None direct. The closest concept is an implicit type conversion that validates the target interface without losing the concrete type, but C# does not work this way — you either declare the variable as the interface type (and lose access to concrete members) or the concrete type (and lose the validation).

Practical use case — NestJS configuration objects:

// NestJS module options often have broad types
type DbConfig = {
  host: string;
  port: number;
  ssl?: boolean;
  poolSize?: number;
};

// Using satisfies: TypeScript validates the config AND keeps the specific types
const dbConfig = {
  host: "localhost",
  port: 5432,
  ssl: false,
  poolSize: 10,
} satisfies DbConfig;

// Without satisfies, if you annotated as DbConfig:
// dbConfig.host would be `string` (fine, but...)
// dbConfig.port would be `number` (fine)
// But if you had a literal type context, satisfies preserves it

// More useful example with a route config
type RouteConfig = {
  method: "GET" | "POST" | "PUT" | "DELETE";
  path: string;
  requiresAuth: boolean;
};

const routes = {
  getUsers: { method: "GET", path: "/users", requiresAuth: true },
  createUser: { method: "POST", path: "/users", requiresAuth: true },
} satisfies Record<string, RouteConfig>;

// routes.getUsers.method is "GET" — not just "GET" | "POST" | "PUT" | "DELETE"
// TypeScript validates the shape but preserves the literal types

When not to use it: satisfies solves a specific problem: you want type validation without losing the specific inferred type. If you do not need access to the specific inferred type afterward (i.e., the widened annotation type is fine), a plain type annotation is simpler and clearer.


Type Narrowing and Type Guards: Recover Specificity from Broad Types

Type narrowing is how TypeScript automatically refines a broad type to a more specific one based on runtime checks. It is built into the language and powers discriminated unions, but it also works with typeof, instanceof, in, and truthiness checks:

function process(input: string | number | null): string {
  if (input === null) {
    return "empty";                      // narrowed to null
  }
  if (typeof input === "string") {
    return input.toUpperCase();          // narrowed to string
  }
  return input.toFixed(2);              // narrowed to number — TypeScript deduced this
}

User-defined type guards let you write a function that narrows types in ways the compiler cannot infer automatically. The return type value is SomeType is the key:

// TypeScript type guard
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    "id" in value &&
    "name" in value &&
    typeof (value as any).id === "string" &&
    typeof (value as any).name === "string"
  );
}

function processApiResponse(raw: unknown): string {
  if (isUser(raw)) {
    return raw.name; // TypeScript now knows raw is User
  }
  return "unknown entity";
}

C# comparison:

// C# pattern matching narrowing
object input = GetValue();
if (input is string s) {
    Console.WriteLine(s.ToUpper()); // s is string here
}
if (input is User u) {
    Console.WriteLine(u.Name);      // u is User here
}

C# pattern matching is similar, but it relies on the CLR’s nominal type system. TypeScript’s structural narrowing is more flexible: you can narrow based on property shapes, not just declared types.

The in narrowing operator is particularly useful with discriminated unions or when working with object types:

type Cat = { meow: () => void };
type Dog = { bark: () => void };

function makeNoise(animal: Cat | Dog): void {
  if ("meow" in animal) {
    animal.meow(); // narrowed to Cat
  } else {
    animal.bark(); // narrowed to Dog
  }
}

Assertion functions are a stricter variant of type guards — they throw if the assertion fails rather than returning boolean:

function assertIsUser(value: unknown): asserts value is User {
  if (!isUser(value)) {
    throw new Error(`Expected User, got: ${JSON.stringify(value)}`);
  }
}

function processFromApi(raw: unknown): void {
  assertIsUser(raw); // throws if raw is not a User
  console.log(raw.name); // TypeScript knows raw is User here
}

When not to use custom type guards: If you are in code that already uses Zod for validation (see Article 2.3), Zod’s .parse() and .safeParse() are both a type guard and a runtime validator combined — writing a manual type guard duplicates work Zod already does. Reserve manual type guards for situations where Zod is not present or would be overkill.
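
For comparison, a sketch of how Zod covers the same ground (this UserSchema mirrors the manual isUser check above):

import { z } from "zod";

const UserSchema = z.object({ id: z.string(), name: z.string() });

function processApiResponse(raw: unknown): string {
  const result = UserSchema.safeParse(raw); // validates and narrows in one step
  if (result.success) {
    return result.data.name; // result.data is typed as { id: string; name: string }
  }
  return "unknown entity";
}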


Key Differences

| Feature | C# | TypeScript |
|---|---|---|
| Union types | No equivalent (closest: sealed hierarchy) | First-class: A \| B |
| Intersection types | Implementing multiple interfaces (nominal) | Structural merge: A & B |
| Literal types | enum (less flexible) | "north" \| "south" |
| Template literal types | Not available | `on${Capitalize<T>}` |
| Conditional types | Not available | T extends U ? X : Y |
| Mapped types | Manual class declarations | { [K in keyof T]: ... } |
| Type narrowing | Pattern matching (is, switch expressions) | typeof, in, discriminant checks |
| User-defined type guards | is patterns (limited) | value is T return type |
| infer keyword | Not available | Extract components from matched types |
| satisfies operator | Not available | Validate without widening inference |
| Runtime type erasure | Types exist at runtime (CLR) | All types erased at runtime (JS) |
| Nominal vs. structural | Nominal (types must match by name) | Structural (types match by shape) |

Gotchas for .NET Engineers

Gotcha 1: TypeScript Types Do Not Exist at Runtime

This is the foundational error .NET engineers make. In C#, if you have a User variable, the CLR knows it is a User. You can use GetType(), is, as, and reflection. The type is real at runtime.

In TypeScript, types are erased during compilation. At runtime, you have JavaScript objects. There is no typeof user === User in TypeScript — typeof only returns "object", "string", "number", "boolean", "function", "symbol", or "undefined".

// This is a common mistake
type User = { id: string; name: string };

const response: User = await fetch("/api/users/1").then((r) => r.json());
// TypeScript is satisfied. But if the API returns { id: 123, name: null },
// TypeScript does NOT catch this. The assertion in the type annotation is a lie
// the compiler accepts without question.

// Correct approach: runtime validation with Zod
import { z } from "zod";

const UserSchema = z.object({ id: z.string(), name: z.string() });
const response = UserSchema.parse(await fetch("/api/users/1").then((r) => r.json()));
// Now throws if the shape is wrong — same guarantees as C# deserialization with validation

Every advanced type feature in this article operates at compile time only. They give you better intellisense and catch more errors during development. They provide zero protection against malformed data at runtime.


Gotcha 2: any Silently Disables the Type System, Including Your Advanced Types

When a value is typed as any, TypeScript stops checking it entirely. Advanced types have no effect when any appears in the chain:

type StrictUser = {
  id: string;
  name: string;
};

function getUser(): any {
  return { id: 123, name: null }; // wrong types — but any masks it
}

const user: StrictUser = getUser(); // TypeScript allows this — no error
console.log(user.name.toUpperCase()); // runtime error: Cannot read properties of null

In C#, the equivalent would be returning object and casting — and you have been trained to treat that as a code smell. Apply the same instinct to any in TypeScript. unknown is the safe alternative: it requires you to narrow before you use it.

function getUser(): unknown {
  return { id: 123, name: null };
}

const user = getUser();
user.name; // error: Object is of type 'unknown'

// Must narrow first:
if (isUser(user)) {
  user.name; // ok — narrowed to User
}

any also infects downstream types. A conditional type checked against any resolves to the union of both branches, and a mapped type over any produces an equally uninformative result. The advanced features in this article do not protect you from any; the snippet below shows the effect.
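
A two-line demonstration, reusing the IsString conditional type defined earlier in this article:

type IsString<T> = T extends string ? true : false;

type FromLiteral = IsString<"hello">; // true
type FromAny = IsString<any>;         // boolean: any matches both branches, so you learn nothing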


Gotcha 3: Complex Type Errors Are Hard to Interpret

C# type errors are usually specific: “Cannot implicitly convert type ‘int’ to ‘string’.” TypeScript type errors on complex generic types can be paragraphs long:

Type '{ id: string; name: string; }' is not assignable to type 'Partial<Omit<Required<User>, "createdAt" | "updatedAt">> & WithId<Pick<User, "role">>'.
  Type '{ id: string; name: string; }' is not assignable to type 'WithId<Pick<User, "role">>'.
    Property 'role' is missing in type '{ id: string; name: string; }' but required in type 'Pick<User, "role">'.

This is not TypeScript failing — it is correct. But it is hard to read. Two practices help:

First, decompose complex types into named intermediate types. Instead of one deeply nested type expression, give each layer a name. The error messages then reference the named type, which is easier to locate.

// Hard to debug when something goes wrong:
type UserPatchRequest = Partial<Omit<Required<User>, "id" | "createdAt">> & { _version: number };

// Easier to debug:
type MutableUserFields = Omit<User, "id" | "createdAt">;
type RequiredMutableFields = Required<MutableUserFields>;
type OptionalMutableFields = Partial<RequiredMutableFields>;
type UserPatchRequest = OptionalMutableFields & { _version: number };

Second, use the TypeScript playground or VS Code’s “Go to Type Definition” to inspect what a complex type resolves to. Hover over the type name — VS Code will show the fully expanded shape. This is the equivalent of evaluating a LINQ expression in the debugger to see the actual SQL.


Gotcha 4: Discriminated Unions Do Not Protect Against Missing Cases Without Exhaustiveness Checking

A switch statement over a discriminated union that does not cover all members compiles without error by default:

type Status = "pending" | "active" | "suspended" | "deleted";

function getLabel(status: Status) {
  switch (status) {
    case "pending":  return "Pending";
    case "active":   return "Active";
    // Missing: "suspended" and "deleted"
    // TypeScript does not complain: the inferred return type quietly becomes
    // string | undefined, and the missing cases return undefined at runtime
  }
}

Add explicit exhaustiveness checking to catch this at compile time:

function assertNever(value: never): never {
  throw new Error(`Unhandled case: ${JSON.stringify(value)}`);
}

function getLabel(status: Status): string {
  switch (status) {
    case "pending":   return "Pending";
    case "active":    return "Active";
    case "suspended": return "Suspended";
    case "deleted":   return "Deleted";
    default:          return assertNever(status); // compile error if a case is missing
  }
}

Now if you add "archived" to Status, every assertNever call becomes a compile error. This is the TypeScript equivalent of C# exhaustive pattern matching with switch expressions that require all cases.


Gotcha 5: Partial<T> Does Not Mean “Optional Fields for Updates” — It Means “All Fields Optional”

A common pattern from C# PATCH endpoints is to accept a DTO where all fields are optional and apply only the provided ones. Partial<T> looks like the right tool. It is close, but it accepts {} as a valid value — an update with no fields set:

type UserUpdateDto = Partial<User>;

function updateUser(id: string, dto: UserUpdateDto): Promise<User> {
  // dto could be {}, which is technically valid but useless
  // dto could also contain undefined values explicitly
  throw new Error("not implemented");
}

updateUser("abc", {}); // TypeScript allows this — is it intentional?

If you need “at least one field must be provided,” you need a more precise type:

// RequireAtLeastOne: at least one key from T must be present
type RequireAtLeastOne<T, Keys extends keyof T = keyof T> =
  Pick<T, Exclude<keyof T, Keys>> &
  { [K in Keys]-?: Required<Pick<T, K>> & Partial<Pick<T, Exclude<Keys, K>>> }[Keys];

type NonEmptyUserUpdate = RequireAtLeastOne<Partial<Omit<User, "id" | "createdAt">>>;

This type is complex enough that it belongs in a shared utility file with a comment explaining it. In practice, many teams skip this precision and add a runtime check instead — which is also defensible.
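
A sketch of that pragmatic runtime check (the helper name is hypothetical):

// Hypothetical guard: reject an update object whose provided fields are all undefined
function assertNonEmptyUpdate(dto: Record<string, unknown>): void {
  if (Object.values(dto).every((value) => value === undefined)) {
    throw new Error("Update must set at least one field");
  }
}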


Hands-On Exercise

This exercise uses the types from a real NestJS + Prisma project structure.

Setup: You have a User model from Prisma:

// Generated by Prisma — do not modify
type User = {
  id: string;
  email: string;
  name: string;
  role: "admin" | "user" | "guest";
  isActive: boolean;
  createdAt: Date;
  updatedAt: Date;
};

Tasks:

  1. Define a CreateUserDto that omits id, createdAt, updatedAt, and isActive (which defaults to true on creation).

  2. Define an UpdateUserDto that makes all remaining fields optional and excludes id and the date fields. Then write a type guard function isNonEmptyUpdate(dto: UpdateUserDto): boolean that returns true only if at least one field is actually set (not undefined).

  3. Define a UserSummary type using Pick that includes only id, name, and role.

  4. Define a UserApiResponse as a discriminated union with these three shapes:

    • Success: { status: "success"; user: UserSummary }
    • Not found: { status: "not_found"; requestedId: string }
    • Forbidden: { status: "forbidden"; reason: string }
  5. Write a handleUserResponse(response: UserApiResponse): string function that returns a human-readable message for each case. Add exhaustiveness checking with assertNever.

  6. Define a UserEventMap type using template literal types and Record, where the keys are "user:created", "user:updated", "user:deleted", and each value is a function that receives the relevant data type.

Expected output: After completing the exercise, open VS Code and hover over UserApiResponse in the switch statement’s default case. It should show never — confirming exhaustiveness. Hover over each branch’s response.user or response.requestedId — TypeScript should show the narrowed type.


Quick Reference

Utility Types Cheat Sheet

| Utility Type | What It Produces | C# Equivalent |
|---|---|---|
| Partial<T> | All properties optional | Manual DTO with ? properties |
| Required<T> | All properties required | Manual DTO with non-null properties |
| Readonly<T> | All properties readonly | Get-only / init-only properties |
| Pick<T, K> | Only keep keys K | Manual class with subset of fields |
| Omit<T, K> | Remove keys K | Manual class without those fields |
| Record<K, V> | Dictionary with typed keys and values | Dictionary<K, V> |
| Exclude<T, U> | Remove U from union T | No equivalent |
| Extract<T, U> | Keep only U from union T | No equivalent |
| NonNullable<T> | Remove null and undefined | T with non-nullable reference type |
| ReturnType<T> | Return type of function T | typeof + reflection |
| Parameters<T> | Tuple of parameter types of function T | No equivalent |
| Awaited<T> | Resolved type of a Promise | No equivalent |
| InstanceType<T> | Instance type of a constructor | No equivalent |

Pattern Decision Guide

| Situation | Use |
|---|---|
| Value is one of several types | Union type A \| B |
| Type must satisfy multiple contracts | Intersection type A & B |
| Field limited to specific string values | String literal union |
| Derive types from a string pattern | Template literal types |
| State machine or tagged variants | Discriminated union |
| Type varies based on input type | Conditional type |
| Derive new type from all keys of another | Mapped type |
| Extract component from matched type | infer keyword |
| Validate shape without widening | satisfies operator |
| Narrow broad type from runtime checks | Type narrowing / type guards |

When to Use Each Feature

| Feature | Use when… | Avoid when… |
|---|---|---|
| Union types | Multiple valid types for a value | More than ~5 arms without a discriminant |
| Intersection types | Composing non-overlapping shapes | Properties with the same key exist in both |
| Literal types | Small, fixed set of string/number values | Values come from a database or config |
| Template literal types | Constructing string type combinations | The combination count is very large |
| Discriminated unions | State machines, tagged variants | You need inheritance-like method sharing |
| Conditional types | Library/utility code, typed overloads | Application business logic |
| Mapped types | Multiple DTO variants from one source | One-off shape that only appears once |
| infer | Library code, advanced generic utilities | Application controllers and services |
| satisfies | Validating without losing literal type inference | Widened annotation type is acceptable |
| Type guards | Unknown input, API responses (without Zod) | Zod is already in use for the same boundary |

Further Reading

  • TypeScript Handbook: Advanced Types — The official reference for everything covered in this article. Read the “Conditional Types,” “Mapped Types,” and “Template Literal Types” sections.
  • TypeScript Utility Types Reference — Complete list of built-in utility types with examples.
  • TypeScript 4.9 Release Notes: satisfies — The original motivation and examples for the satisfies operator, from the TypeScript team.
  • Type Challenges — A curated set of type-system problems ranging from warm-up to extreme. Working through the medium-level challenges is the most effective way to develop fluency with conditional types and infer. Treat it as a kata repository, not a completionist exercise.

TypeScript Through the Layers: End-to-End Type Safety with Zod

For .NET engineers who know: EF Core models, DTOs, Data Annotations, FluentValidation, NSwag client generation, and ASP.NET Core model binding
You’ll learn: How TypeScript achieves end-to-end type safety across the full stack — database through UI — using Prisma, Zod, NestJS, tRPC, and React, and why runtime validation is a structural requirement rather than an afterthought
Time: 25-30 min read

This is one of the two most important articles in this curriculum. If you take one architectural insight from the entire course, it should be this: in the TypeScript stack, types are a compile-time fiction. Nothing enforces them at runtime unless you deliberately add that enforcement. Zod is how you add it. Everything else in this article builds on that premise.


The .NET Way (What You Already Know)

In ASP.NET Core, type safety flows naturally from the CLR’s enforced type system. When a request arrives at your API, the runtime itself participates in validation and type enforcement at every step.

Consider the standard layered flow in a .NET application:

// 1. EF Core model — the database schema expressed as a C# class
[Table("users")]
public class User
{
    public int Id { get; set; }
    public string Name { get; set; } = null!;
    public string Email { get; set; } = null!;
    public DateTime CreatedAt { get; set; }
}

// 2. DTO — a separate class that shapes the API surface
public class CreateUserDto
{
    [Required]
    [StringLength(100, MinimumLength = 2)]
    public string Name { get; set; } = null!;

    [Required]
    [EmailAddress]
    public string Email { get; set; } = null!;
}

// 3. Controller — ASP.NET binds the request body to CreateUserDto,
//    validates Data Annotations, and provides a strongly typed parameter.
//    [ApiController] returns 400 automatically if validation fails.
[ApiController]
[Route("api/[controller]")]
public class UsersController : ControllerBase
{
    private readonly AppDbContext _db;

    public UsersController(AppDbContext db) => _db = db;

    [HttpPost]
    public async Task<ActionResult<UserResponseDto>> Create([FromBody] CreateUserDto dto)
    {
        var user = new User
        {
            Name = dto.Name,
            Email = dto.Email,
            CreatedAt = DateTime.UtcNow
        };
        _db.Users.Add(user);
        await _db.SaveChangesAsync();

        return Ok(new UserResponseDto { Id = user.Id, Name = user.Name, Email = user.Email });
    }
}

// 4. NSwag generates a typed C# client from the OpenAPI spec.
//    The frontend (Blazor or a separate .NET client) uses it with full type safety.
var client = new UsersClient(httpClient);
var result = await client.CreateAsync(new CreateUserDto { Name = "Alice", Email = "alice@example.com" });
// result.Id, result.Name, result.Email — all typed

Notice what the CLR does for you behind the scenes:

  • [FromBody] binding physically constructs a CreateUserDto instance from the JSON body
  • Data Annotations are verified by the model binder at request time — the CLR will not call your action with an invalid dto
  • The User EF entity is a CLR object. If you assign user.Email = 42, the compiler refuses
  • NSwag reads your compiled assembly’s reflection metadata to generate the OpenAPI spec, which is then used to produce a typed client

The C# type system is enforced by the runtime. Types are not descriptions — they are contracts the CLR upholds.


The TypeScript Stack Way

TypeScript’s type system is erased at runtime. This single fact explains almost every architectural decision in this article.

After tsc compiles your TypeScript to JavaScript, every type annotation, every interface, every generic parameter vanishes. The resulting JavaScript has no concept of User, CreateUserDto, or string. What remains is a plain JavaScript object with properties. If JSON comes in from an HTTP request and you tell TypeScript it is a User, TypeScript believes you — because at runtime, TypeScript is not there to disagree.

This is not a flaw to work around. It is the trade-off TypeScript made: a rich, expressive type system at development time, with zero runtime overhead. The consequence is that you must construct your own runtime type enforcement deliberately. Zod is the standard tool for doing so.
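
To make the erasure concrete, here is a minimal sketch (the endpoint and the User type are hypothetical) of a cast that compiles happily while guaranteeing nothing at runtime:

type User = { id: number; name: string; email: string };

async function loadUser(): Promise<void> {
  const response = await fetch('/api/users/1');     // hypothetical endpoint
  const user = (await response.json()) as User;     // compiles, but the cast is only an assertion

  // At runtime the cast no longer exists. If the API actually returned { "id": "1" }
  // or an HTML error page, user.name may be undefined and the next line throws a
  // TypeError; TypeScript never gets a chance to object.
  console.log(user.name.toUpperCase());
}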

Here is the same layered flow in both stacks, side by side; the TypeScript column is what the rest of this article builds:

graph TD
    subgraph dotnet[".NET Stack"]
        D1["EF Model (C# class)"]
        D2["DTO (C# class)"]
        D3["Data Annotations (declarative)"]
        D4["Controller [FromBody]"]
        D5["NSwag → C# client"]
        D6["Blazor/Razor (C# types)"]
        D1 --> D2 --> D3 --> D4 --> D5 --> D6
    end

    subgraph ts["TypeScript Stack"]
        T1["Prisma schema → generated TS types"]
        T2["Zod schema → z.infer<> → TS type"]
        T3["Zod .parse() (runtime validation)"]
        T4["NestJS Pipe + Zod schema"]
        T5["tRPC router → typed client"]
        T6["React component (inferred TS types)"]
        T1 --> T2 --> T3 --> T4 --> T5 --> T6
    end

Each layer of that diagram is a section of this article. We will walk through each one — with complete, runnable code — building a single feature from schema to UI.

The feature: creating and retrieving a user with a name, email, and optional bio. Simple enough to be clear, realistic enough to surface the patterns you actually need.


Layer 1: Database to TypeScript — Prisma Schema

In EF Core, your C# entity class both defines the database schema and provides the CLR type you work with in code. In Prisma, these roles are filled by a schema.prisma file.

// prisma/schema.prisma

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        Int      @id @default(autoincrement())
  name      String   @db.VarChar(100)
  email     String   @unique @db.VarChar(255)
  bio       String?
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

After running prisma generate, the Prisma CLI generates a fully typed client from this schema. You never write the TypeScript types for your database models — they are derived from the schema automatically.

// This type is generated by Prisma — you do not write it manually
// It mirrors the model definition exactly
type User = {
  id: number;
  name: string;
  email: string;
  bio: string | null;  // Nullable fields (String? in the schema) become T | null, not optional
  createdAt: Date;
  updatedAt: Date;
};

The Prisma client then gives you typed query methods:

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Return type is inferred as User | null — equivalent to .FindAsync() in EF
const user = await prisma.user.findUnique({ where: { id: 1 } });

// Return type is inferred as User[]
const users = await prisma.user.findMany({
  where: { email: { contains: '@example.com' } },
  orderBy: { createdAt: 'desc' },
});

The .NET equivalent is EF Core’s DbSet<User> mapped to a User class, with LINQ providing typed queries. The key difference is that Prisma derives the types from the schema file, while EF Core derives the schema from the C# class. Neither approach is superior — they solve the same problem from opposite directions.

EF Core vs. Prisma at a glance:

| Concern | EF Core | Prisma |
| --- | --- | --- |
| Schema source of truth | C# class (code-first) | schema.prisma file |
| Type generation | C# class IS the type | prisma generate creates types |
| Migrations | Add-Migration, Update-Database | prisma migrate dev, prisma migrate deploy |
| Query API | LINQ | Prisma Client (fluent builder) |
| Change tracking | Yes (DbContext tracks entities) | No (stateless by design) |
| Raw SQL | FromSqlRaw, ExecuteSqlRaw | prisma.$queryRaw, prisma.$executeRaw |
| Relations | Include(), ThenInclude() | include: { relation: true } |

Layer 2: The Zod Schema — Where Types Meet Runtime

This is the core concept of the article. Read this section carefully.

In .NET, your DTO class serves two purposes simultaneously:

  1. It declares the type — the C# class definition tells the compiler what shape the object has
  2. It carries validation rules — Data Annotations like [Required], [StringLength], [EmailAddress] declare what constitutes a valid value

Both happen in one class because the CLR can inspect the class at runtime via reflection. Data Annotations are not erased — they are metadata stored in the compiled assembly.

In TypeScript, you cannot do this with a single interface or type. A TypeScript interface can declare the shape, but at runtime that interface is gone. You need a separate mechanism for runtime validation. Historically, teams used two separate things: a TypeScript interface (for the type) plus a validation library like class-validator or Joi (for the runtime check). This creates duplication: you declare the same shape twice and must keep them synchronized manually.
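
A minimal sketch of that older pattern, with the shape declared twice: once for the compiler, once for the runtime check (the hand-written validator below stands in for class-validator or Joi):

// The type, for the compiler
interface CreateUserInput {
  name: string;
  email: string;
}

// The runtime check, duplicating the same shape by hand
function validateCreateUser(input: unknown): CreateUserInput {
  const { name, email } = (input ?? {}) as { name?: unknown; email?: unknown };
  if (typeof name !== 'string' || name.length < 2) {
    throw new Error('name must be a string with at least 2 characters');
  }
  if (typeof email !== 'string' || !email.includes('@')) {
    throw new Error('email must be a valid email address');
  }
  return { name, email };
}

// Rename a field on the interface and nothing forces you to update the validator; the two drift apart.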

Zod eliminates this duplication. You define a Zod schema once. From that schema, Zod infers the TypeScript type. You get runtime validation AND compile-time types from a single source of truth.

import { z } from 'zod';

// Define the schema once — this is both your validator AND your type source
const createUserSchema = z.object({
  name: z.string().min(2).max(100),
  email: z.string().email(),
  bio: z.string().max(500).optional(),
});

// Extract the TypeScript type from the schema — no separate interface needed
// This is equivalent to writing the CreateUserDto class in C#
type CreateUserInput = z.infer<typeof createUserSchema>;
// Inferred type:
// {
//   name: string;
//   email: string;
//   bio?: string | undefined;
// }

// Runtime validation — equivalent to Data Annotations being checked by [ApiController]
const result = createUserSchema.safeParse({
  name: 'Alice',
  email: 'not-an-email',
  bio: 'Software engineer',
});

if (!result.success) {
  // result.error is a ZodError with structured field-level errors
  console.log(result.error.flatten());
  // {
  //   fieldErrors: { email: ['Invalid email'] },
  //   formErrors: []
  // }
}

if (result.success) {
  // result.data is typed as CreateUserInput — TypeScript knows the shape
  const { name, email, bio } = result.data;
  // name: string, email: string, bio: string | undefined
}

The two Zod methods you will use constantly:

  • schema.parse(data) — validates and returns typed data, or throws a ZodError if invalid. Use inside try/catch blocks or NestJS pipes (see the sketch after this list).
  • schema.safeParse(data) — never throws; returns { success: true, data: T } or { success: false, error: ZodError }. Use when you need to handle validation errors gracefully.
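
A minimal sketch of the parse variant, reusing the createUserSchema defined above (the surrounding function is illustrative):

import { ZodError } from 'zod';

function handleCreateUser(body: unknown) {
  try {
    const input = createUserSchema.parse(body); // throws ZodError if the body does not match
    // From here on, input is typed as the schema's inferred type
    return { ok: true as const, input };
  } catch (err) {
    if (err instanceof ZodError) {
      // The same structured errors that safeParse exposes via result.error
      return { ok: false as const, errors: err.flatten().fieldErrors };
    }
    throw err;
  }
}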

Building a richer schema that mirrors FluentValidation:

import { z } from 'zod';

// Equivalent to a FluentValidation AbstractValidator<CreateUserDto>
const createUserSchema = z.object({
  name: z
    .string({ required_error: 'Name is required' })
    .min(2, 'Name must be at least 2 characters')
    .max(100, 'Name must not exceed 100 characters')
    .trim(),

  email: z
    .string({ required_error: 'Email is required' })
    .email('Must be a valid email address')
    .toLowerCase(),

  bio: z
    .string()
    .max(500, 'Bio must not exceed 500 characters')
    .optional(),

  // Equivalent to [Range(18, 120)] in Data Annotations
  age: z
    .number()
    .int('Age must be a whole number')
    .min(18, 'Must be at least 18 years old')
    .max(120)
    .optional(),
});

// You can also define a response schema — equivalent to a UserResponseDto
const userResponseSchema = z.object({
  id: z.number().int().positive(),
  name: z.string(),
  email: z.string().email(),
  bio: z.string().nullable(),  // nullable() means string | null (not undefined)
  createdAt: z.string().datetime(),
});

// Types inferred from the schemas
type CreateUserInput = z.infer<typeof createUserSchema>;
type UserResponse = z.infer<typeof userResponseSchema>;

// Transformations — equivalent to [FromBody] binding + an AutoMapper projection in one step.
// Declared under a different name here so it does not collide with createUserSchema above.
const createUserWithDisplayNameSchema = z.object({
  name: z.string().min(2).max(100).trim(),
  email: z.string().email().toLowerCase().trim(),
  bio: z.string().max(500).optional(),
}).transform((data) => ({
  ...data,
  // Any transform applied during parsing
  displayName: data.name.split(' ')[0],
}));

Where to put your schemas in a monorepo:

In a monorepo with a shared package, schemas that are used on both the frontend and backend belong in a shared location:

packages/
  shared/
    src/
      schemas/
        user.schema.ts      // createUserSchema, userResponseSchema
        product.schema.ts
      index.ts
apps/
  api/                      // NestJS — imports from @repo/shared
  web/                      // Next.js — imports from @repo/shared

This gives you the equivalent of sharing FluentValidation rules between a .NET API and a Blazor WebAssembly frontend — the same validation logic runs on both sides. In .NET this is hard because your validation lives in C# and your frontend is likely a different language. In a TypeScript monorepo, the shared validation is automatic.
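
In practice the sharing is nothing more than two imports of the same module. A compressed sketch (the @repo/shared package is wired up later in this article; the two functions are illustrative):

// apps/api (server-side), e.g. inside a pipe or service
import { createUserSchema } from '@repo/shared';

export function validateOnServer(body: unknown) {
  return createUserSchema.parse(body);            // runtime validation on the server
}

// apps/web (client-side), e.g. inside a form submit handler
import { createUserSchema } from '@repo/shared';

export function validateOnClient(formValues: unknown) {
  return createUserSchema.safeParse(formValues);  // the same rules, validated in the browser
}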


Layer 3: API Validation with NestJS Pipes

NestJS pipes are the equivalent of ASP.NET Core’s model binding pipeline. A pipe receives the raw request data (already parsed from JSON by the NestJS framework), validates it, transforms it, and hands the result to your controller action.

Without a validation pipe, your controller receives any. With a Zod pipe, it receives a correctly typed, validated object.

Here is the Zod pipe implementation:

// src/common/pipes/zod-validation.pipe.ts
import {
  PipeTransform,
  Injectable,
  ArgumentMetadata,
  BadRequestException,
} from '@nestjs/common';
import { ZodSchema, ZodError } from 'zod';

@Injectable()
export class ZodValidationPipe implements PipeTransform {
  constructor(private readonly schema: ZodSchema) {}

  transform(value: unknown, _metadata: ArgumentMetadata) {
    const result = this.schema.safeParse(value);

    if (!result.success) {
      // Format Zod errors into a structure the frontend can consume
      const errors = this.formatZodErrors(result.error);
      throw new BadRequestException({
        message: 'Validation failed',
        errors,
      });
    }

    return result.data;
  }

  private formatZodErrors(error: ZodError) {
    return error.errors.map((e) => ({
      path: e.path.join('.'),
      message: e.message,
      code: e.code,
    }));
  }
}

Now the controller:

// src/users/users.controller.ts
import {
  Controller,
  Post,
  Get,
  Body,
  Param,
  ParseIntPipe,
  UsePipes,
  HttpCode,
  HttpStatus,
} from '@nestjs/common';
import { z } from 'zod';
import { ZodValidationPipe } from '../common/pipes/zod-validation.pipe';
import { UsersService } from './users.service';
import { createUserSchema, userResponseSchema } from '@repo/shared';

// Import the inferred type — the controller parameter is fully typed
type CreateUserInput = z.infer<typeof createUserSchema>;
type UserResponse = z.infer<typeof userResponseSchema>;

@Controller('users')
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  // @UsePipes with ZodValidationPipe is the equivalent of [FromBody] + [ApiController].
  // The pipe validates the body against createUserSchema before the method is called.
  // If validation fails, ZodValidationPipe throws BadRequestException (-> 400).
  // If validation passes, `dto` is typed as CreateUserInput — not `any`.
  @Post()
  @HttpCode(HttpStatus.CREATED)
  @UsePipes(new ZodValidationPipe(createUserSchema))
  async create(@Body() dto: CreateUserInput): Promise<UserResponse> {
    return this.usersService.create(dto);
  }

  @Get(':id')
  async findOne(@Param('id', ParseIntPipe) id: number): Promise<UserResponse> {
    return this.usersService.findOne(id);
  }
}

And the service, where Prisma and Zod types meet:

// src/users/users.service.ts
import { Injectable, NotFoundException } from '@nestjs/common';
import { PrismaService } from '../prisma/prisma.service';
import { createUserSchema, userResponseSchema } from '@repo/shared';
import { z } from 'zod';

type CreateUserInput = z.infer<typeof createUserSchema>;
type UserResponse = z.infer<typeof userResponseSchema>;

@Injectable()
export class UsersService {
  constructor(private readonly prisma: PrismaService) {}

  async create(dto: CreateUserInput): Promise<UserResponse> {
    // dto.name and dto.email are already validated and typed
    // No need to re-check: if we are here, the pipe succeeded
    const user = await this.prisma.user.create({
      data: {
        name: dto.name,
        email: dto.email,
        bio: dto.bio ?? null,
      },
    });

    // Map the Prisma User type to the UserResponse shape
    // This is the equivalent of AutoMapper or a manual DTO projection in .NET
    return {
      id: user.id,
      name: user.name,
      email: user.email,
      bio: user.bio,
      createdAt: user.createdAt.toISOString(),
    };
  }

  async findOne(id: number): Promise<UserResponse> {
    const user = await this.prisma.user.findUnique({ where: { id } });

    if (!user) {
      // Equivalent to returning NotFound() in a .NET controller
      throw new NotFoundException(`User with id ${id} not found`);
    }

    return {
      id: user.id,
      name: user.name,
      email: user.email,
      bio: user.bio,
      createdAt: user.createdAt.toISOString(),
    };
  }
}

The NestJS module wires everything together, analogous to registering services in IServiceCollection:

// src/users/users.module.ts
import { Module } from '@nestjs/common';
import { UsersController } from './users.controller';
import { UsersService } from './users.service';
import { PrismaModule } from '../prisma/prisma.module';

@Module({
  imports: [PrismaModule],
  controllers: [UsersController],
  providers: [UsersService],
})
export class UsersModule {}

At this point, the backend layer is complete. The flow is:

  1. JSON body arrives
  2. NestJS parses JSON to unknown
  3. ZodValidationPipe calls createUserSchema.safeParse(value)
  4. On success, controller.create(dto) receives a correctly typed CreateUserInput
  5. UsersService.create(dto) calls Prisma with typed inputs
  6. Prisma returns a typed User from the database
  7. The service returns UserResponse to the controller
  8. NestJS serializes it to JSON

No any types. No runtime surprises. Validated at the boundary.


Layer 4: API to Frontend — tRPC vs. OpenAPI

This is where .NET and the TypeScript stack diverge most sharply. In .NET, the standard approach is:

  1. ASP.NET generates an OpenAPI spec (via Swashbuckle or NSwag)
  2. NSwag generates a typed C# client from the spec
  3. The client is used in Blazor or a separate client project

There is a code generation step — a separate artifact that must be regenerated whenever the API changes.

tRPC eliminates code generation entirely.

tRPC works by sharing a TypeScript type — the router definition — between the server and the client. The client knows the exact input and output types of every procedure because they share the same type at build time, through TypeScript’s module system. No HTTP, no OpenAPI, no generated client. Just TypeScript inference across the network boundary.

The trade-off is coupling: both client and server must be TypeScript and must be in the same monorepo (or at least share a type package). If your API is consumed by multiple clients in different languages, tRPC is not viable. If you have a .NET backend, tRPC is not an option — see Article 4B.1 for that architecture.

For a monorepo where the frontend and API are both TypeScript, tRPC is the best available option for type safety.

Here is the complete tRPC setup:

Server — define the router:

// apps/api/src/trpc/router.ts
import { initTRPC, TRPCError } from '@trpc/server';
import { z } from 'zod';
import { PrismaClient } from '@prisma/client';
import { createUserSchema, userResponseSchema } from '@repo/shared';

const prisma = new PrismaClient();

// Initialize tRPC — equivalent to configuring ASP.NET Core services
const t = initTRPC.create();

export const router = t.router;
export const publicProcedure = t.procedure;

// Define the user router — equivalent to a UsersController
const usersRouter = router({
  // Equivalent to [HttpPost] Create
  create: publicProcedure
    .input(createUserSchema)       // Zod schema — validates input automatically
    .output(userResponseSchema)    // Zod schema — validates output (optional but recommended)
    .mutation(async ({ input }) => {
      // input is typed as CreateUserInput — TypeScript knows the exact shape
      const user = await prisma.user.create({
        data: {
          name: input.name,
          email: input.email,
          bio: input.bio ?? null,
        },
      });

      return {
        id: user.id,
        name: user.name,
        email: user.email,
        bio: user.bio,
        createdAt: user.createdAt.toISOString(),
      };
    }),

  // Equivalent to [HttpGet("{id}")] FindOne
  findOne: publicProcedure
    .input(z.object({ id: z.number().int().positive() }))
    .output(userResponseSchema)
    .query(async ({ input }) => {
      const user = await prisma.user.findUnique({
        where: { id: input.id },
      });

      if (!user) {
        // tRPC error codes map to HTTP status codes
        throw new TRPCError({
          code: 'NOT_FOUND',
          message: `User with id ${input.id} not found`,
        });
      }

      return {
        id: user.id,
        name: user.name,
        email: user.email,
        bio: user.bio,
        createdAt: user.createdAt.toISOString(),
      };
    }),
});

// The root router — equivalent to the full controller registration in Program.cs
export const appRouter = router({
  users: usersRouter,
});

// This type is exported and shared with the client
// It contains no runtime code — only types
export type AppRouter = typeof appRouter;

Server — expose as HTTP endpoint (NestJS or standalone Express):

// apps/api/src/main.ts (standalone Express, for simplicity)
import express from 'express';
import * as trpcExpress from '@trpc/server/adapters/express';
import { appRouter } from './trpc/router';

const app = express();

app.use(
  '/trpc',
  trpcExpress.createExpressMiddleware({
    router: appRouter,
    // createContext can inject request context (auth, db, etc.)
    createContext: ({ req, res }) => ({ req, res }),
  }),
);

app.listen(3001, () => console.log('API running on port 3001'));

Client — consume the router with full type inference:

// apps/web/src/lib/trpc.ts
import { createTRPCReact } from '@trpc/react-query';
import type { AppRouter } from '../../../api/src/trpc/router';
// Note: we import a TYPE — no runtime code crosses the module boundary

export const trpc = createTRPCReact<AppRouter>();

The AppRouter import is a type-only import. At runtime, the web app does not import any server code. TypeScript extracts only the type information, which it uses to provide autocomplete and type checking in the client. When the server’s router changes, TypeScript immediately flags incompatible usages in the client — before running a single test.


Layer 5: Frontend Component with TanStack Query

Now the full end-to-end picture, from a React component’s perspective.

// apps/web/src/components/CreateUserForm.tsx
'use client';  // Next.js — this runs in the browser

import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { trpc } from '../lib/trpc';
import { createUserSchema } from '@repo/shared';
import { z } from 'zod';

// The same schema that runs on the server now runs on the client too.
// The form is validated with the same rules — zero duplication.
type CreateUserInput = z.infer<typeof createUserSchema>;

export function CreateUserForm() {
  // React Hook Form + Zod resolver
  // zodResolver connects the Zod schema to React Hook Form's validation engine
  // This is the TypeScript equivalent of Blazor's EditForm + DataAnnotationsValidator
  const {
    register,
    handleSubmit,
    reset,
    formState: { errors, isSubmitting },
  } = useForm<CreateUserInput>({
    resolver: zodResolver(createUserSchema),
    defaultValues: {
      name: '',
      email: '',
      bio: '',
    },
  });

  // tRPC mutation — equivalent to calling the typed NSwag client in .NET
  // The input type is inferred from the router — TypeScript enforces it
  const createUser = trpc.users.create.useMutation({
    onSuccess: (data) => {
      // data is typed as UserResponse — TypeScript knows id, name, email, bio, createdAt
      console.log(`Created user: ${data.name} (id: ${data.id})`);
      reset();
    },
    onError: (error) => {
      // tRPC maps server errors to structured error objects
      console.error('Failed to create user:', error.message);
    },
  });

  const onSubmit = (data: CreateUserInput) => {
    // TypeScript enforces that data matches CreateUserInput
    // The tRPC client enforces that it matches the server's input schema
    createUser.mutate(data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)} noValidate>
      <div>
        <label htmlFor="name">Name</label>
        <input id="name" type="text" {...register('name')} />
        {/* errors.name is typed — TypeScript knows its shape */}
        {errors.name && <p role="alert">{errors.name.message}</p>}
      </div>

      <div>
        <label htmlFor="email">Email</label>
        <input id="email" type="email" {...register('email')} />
        {errors.email && <p role="alert">{errors.email.message}</p>}
      </div>

      <div>
        <label htmlFor="bio">Bio (optional)</label>
        <textarea id="bio" {...register('bio')} />
        {errors.bio && <p role="alert">{errors.bio.message}</p>}
      </div>

      <button type="submit" disabled={isSubmitting || createUser.isPending}>
        {createUser.isPending ? 'Creating...' : 'Create User'}
      </button>

      {createUser.isError && (
        <p role="alert">{createUser.error.message}</p>
      )}
    </form>
  );
}

Reading a user:

// apps/web/src/components/UserProfile.tsx
'use client';

import { trpc } from '../lib/trpc';

interface UserProfileProps {
  userId: number;
}

export function UserProfile({ userId }: UserProfileProps) {
  // trpc.users.findOne.useQuery — equivalent to TanStack Query's useQuery,
  // but with the input and output types inferred from the tRPC router.
  // data is typed as UserResponse | undefined — never `any`
  const { data: user, isLoading, error } = trpc.users.findOne.useQuery(
    { id: userId },
    {
      // TanStack Query options — same as if you wrote useQuery manually
      staleTime: 5 * 60 * 1000,  // 5 minutes
      retry: 1,
    },
  );

  if (isLoading) return <div>Loading...</div>;

  if (error) {
    // error.data?.code is 'NOT_FOUND' if the user doesn't exist
    if (error.data?.code === 'NOT_FOUND') {
      return <div>User not found.</div>;
    }
    return <div>Error: {error.message}</div>;
  }

  if (!user) return null;

  // user is typed as UserResponse — all fields are known and typed
  return (
    <article>
      <h1>{user.name}</h1>
      <p>{user.email}</p>
      {user.bio && <p>{user.bio}</p>}
      <time dateTime={user.createdAt}>
        Member since {new Date(user.createdAt).toLocaleDateString()}
      </time>
    </article>
  );
}

Wiring tRPC into the Next.js app:

// apps/web/src/app/providers.tsx
'use client';

import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { httpBatchLink } from '@trpc/client';
import { trpc } from '../lib/trpc';
import { useState } from 'react';

export function Providers({ children }: { children: React.ReactNode }) {
  const [queryClient] = useState(() => new QueryClient());
  const [trpcClient] = useState(() =>
    trpc.createClient({
      links: [
        httpBatchLink({
          url: process.env.NEXT_PUBLIC_API_URL + '/trpc',
        }),
      ],
    }),
  );

  return (
    <trpc.Provider client={trpcClient} queryClient={queryClient}>
      <QueryClientProvider client={queryClient}>
        {children}
      </QueryClientProvider>
    </trpc.Provider>
  );
}

// apps/web/src/app/layout.tsx
import { Providers } from './providers';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  );
}

Layer 6: Shared Schemas and Monorepo Structure

The full power of Zod becomes clear in a monorepo. The schema defined once in packages/shared runs in three places:

  1. NestJS API — validates incoming request bodies (server-side, runtime)
  2. tRPC router — validates procedure inputs and outputs (server-side, runtime)
  3. React form — validates user input before submission (client-side, runtime)

The TypeScript type derived from the schema is used by:

  1. NestJS controllers and services — compile-time type checking
  2. tRPC client — compile-time type checking on all queries and mutations
  3. React Hook Form — compile-time type checking on form field names and values

Here is the shared package structure:

// packages/shared/src/schemas/user.schema.ts
import { z } from 'zod';

export const createUserSchema = z.object({
  name: z.string().min(2).max(100).trim(),
  email: z.string().email().toLowerCase().trim(),
  bio: z.string().max(500).optional(),
});

export const updateUserSchema = createUserSchema.partial();
// Equivalent to Partial<CreateUserDto> — all fields become optional.
// This is the PATCH pattern. In .NET you would define a separate UpdateUserDto.

export const userResponseSchema = z.object({
  id: z.number().int().positive(),
  name: z.string(),
  email: z.string().email(),
  bio: z.string().nullable(),
  createdAt: z.string().datetime(),
});

// Export the inferred types alongside the schemas
export type CreateUserInput = z.infer<typeof createUserSchema>;
export type UpdateUserInput = z.infer<typeof updateUserSchema>;
export type UserResponse = z.infer<typeof userResponseSchema>;

// packages/shared/src/index.ts
// Export all schemas and types from a single entry point
export * from './schemas/user.schema';

// packages/shared/package.json
{
  "name": "@repo/shared",
  "version": "0.0.1",
  "main": "./src/index.ts",
  "exports": {
    ".": "./src/index.ts"
  }
}

Both the API and the web app declare a dependency on @repo/shared in their package.json:

{
  "dependencies": {
    "@repo/shared": "workspace:*",
    "zod": "^3.23.0"
  }
}

The workspace:* protocol is pnpm’s way of referencing another package in the same monorepo — equivalent to a project reference in a .csproj file.


Key Differences

| Concept | .NET / ASP.NET Core | TypeScript Stack |
| --- | --- | --- |
| Runtime type enforcement | CLR enforces types natively | TypeScript types erased; Zod provides runtime enforcement |
| DTO definition | Separate C# class | Zod schema + z.infer<> — one source for both |
| Request validation | Data Annotations + [ApiController] | ZodValidationPipe calls schema.safeParse() |
| Validation sharing | Separate FluentValidation + JS library | Same Zod schema imported on frontend and backend |
| API client type safety | NSwag generates typed client from OpenAPI spec | tRPC shares types directly via TypeScript module system |
| Code generation | NSwag runs as a build step | tRPC requires none; types flow from router definition |
| ORM type source | C# class defines both schema and type | schema.prisma defines schema; prisma generate creates types |
| Null handling | Nullable reference types (C# 8+) | z.nullable() vs z.optional() — distinct concepts |
| Validation errors | ModelStateDictionary → 400 response | ZodError → BadRequestException → 400 response |
| Cross-language client | NSwag works across any language | tRPC only works TypeScript-to-TypeScript |

Gotchas for .NET Engineers

1. z.optional() and z.nullable() are different things

In C#, a nullable reference type (string?) means the value can be null. TypeScript and Zod distinguish between two different concepts:

  • z.optional() — the field may be absent from the object entirely (its value is undefined)
  • z.nullable() — the field must be present, but its value may be null

These are separate and both different from a required, non-null field:

const schema = z.object({
  required: z.string(),              // Must be present, must be a string
  optional: z.string().optional(),   // May be absent; if present, must be a string
  nullable: z.string().nullable(),   // Must be present; may be null
  both: z.string().nullish(),        // May be absent OR null (equivalent to string? in C#)
});

type T = z.infer<typeof schema>;
// {
//   required: string;
//   optional?: string | undefined;
//   nullable: string | null;
//   both?: string | null | undefined;
// }

Prisma also distinguishes these. In Prisma’s generated types, a field marked String? in the schema becomes string | null in TypeScript (nullable, not optional). This trips up .NET engineers who expect string? to mean “might not be there” — in TypeScript it specifically means “is there but its value is null.”

When mapping between Prisma types and Zod schemas, be deliberate:

// Prisma generates: bio: string | null
// The correct Zod equivalent for an API response:
const responseSchema = z.object({
  bio: z.string().nullable(),   // Correct — matches Prisma's generated type
  // NOT z.string().optional() — that would mean "may be undefined", not "may be null"
});

2. TypeScript does not validate API responses — you must use Zod explicitly

In C#, if your controller returns a UserResponseDto, the CLR verifies that the returned object IS a UserResponseDto. If you accidentally return null where a non-nullable field is expected, the compiler or runtime catches it.

In TypeScript, annotating a function as async getUser(): Promise<UserResponse> does not cause TypeScript to validate the returned value at runtime. TypeScript trusts you. If the Prisma query returns a shape that does not match UserResponse, TypeScript cannot detect this at runtime — the mismatch travels to the client silently.

Two places where this matters:

Parsing API responses on the client (without tRPC): If you are consuming an external API without tRPC, always parse the response through a Zod schema. Do not trust that the API returns what its documentation says.

// Without Zod — TypeScript believes you, but there is no runtime check
const response = await fetch('/api/users/1');
const user = (await response.json()) as UserResponse; // Dangerous cast — no validation

// With Zod — validated at the boundary
const response = await fetch('/api/users/1');
const raw = await response.json();
const user = userResponseSchema.parse(raw); // Throws if the shape is wrong

tRPC output validation: tRPC’s .output(schema) validates what the server returns before sending it to the client. This is optional but recommended because it catches server-side bugs before they become client-side bugs.

findOne: publicProcedure
  .input(z.object({ id: z.number() }))
  .output(userResponseSchema)  // Validates the return value before sending
  .query(async ({ input }) => {
    // If this returns a shape that doesn't match userResponseSchema,
    // tRPC throws a server error rather than sending malformed data
    return usersService.findOne(input.id);
  }),

3. Zod’s type inference is deep — do not write types manually

A common mistake from .NET engineers learning Zod is to define the Zod schema and then also write a separate TypeScript interface that manually mirrors it:

// Do NOT do this — it defeats the purpose
const createUserSchema = z.object({
  name: z.string().min(2).max(100),
  email: z.string().email(),
});

// Manual duplication — now you must keep these in sync manually
// This is exactly the problem Zod was designed to solve
interface CreateUserInput {
  name: string;
  email: string;
}

Use z.infer<> exclusively:

// Correct — one source of truth
const createUserSchema = z.object({
  name: z.string().min(2).max(100),
  email: z.string().email(),
});

type CreateUserInput = z.infer<typeof createUserSchema>;
// TypeScript derives the interface automatically from the schema
// Change the schema -> type changes automatically

This also applies to response types. Define a userResponseSchema, infer the type, and use that everywhere. Do not write a UserResponse interface separately.

4. tRPC is not suitable for public APIs or multi-language consumers

tRPC’s type safety mechanism works by importing TypeScript types from the server into the client at build time. This requires both ends to be TypeScript in the same build environment.

Consequences:

  • tRPC cannot be used if your API is consumed by a mobile app written in Swift or Kotlin
  • tRPC cannot be used if your API is consumed by a .NET service
  • tRPC cannot be used if you are building a public API that third parties consume

In these cases, use NestJS with @nestjs/swagger to generate an OpenAPI spec, and use openapi-typescript or orval on the client to generate typed bindings — the same pattern as NSwag in .NET, but targeting TypeScript instead of C#. See Article 4B.4 for the full OpenAPI bridge architecture.

The rule: tRPC for internal TypeScript-to-TypeScript communication, OpenAPI for anything else.

5. ZodError structures are nested — format them before returning from your API

ZodError contains an errors array of ZodIssue objects. Each issue has a path array representing the nested location of the error (e.g., ['address', 'city'] for a nested object). If you return a raw ZodError to the client, it is verbose and inconsistent with typical API error response formats.

The ZodValidationPipe shown earlier flattens these errors into a clean structure. Make sure your global exception filter in NestJS also handles ZodError consistently:

// src/common/filters/zod-exception.filter.ts
import { ExceptionFilter, Catch, ArgumentsHost, BadRequestException } from '@nestjs/common';
import { ZodError } from 'zod';
import { Response } from 'express';

@Catch(ZodError)
export class ZodExceptionFilter implements ExceptionFilter {
  catch(exception: ZodError, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();

    response.status(400).json({
      statusCode: 400,
      message: 'Validation failed',
      errors: exception.errors.map((e) => ({
        field: e.path.join('.'),
        message: e.message,
      })),
    });
  }
}

Register it globally in main.ts:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { ZodExceptionFilter } from './common/filters/zod-exception.filter';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalFilters(new ZodExceptionFilter());
  await app.listen(3001);
}

bootstrap();

6. Zod schemas are not automatically used by NestJS — you must wire the pipe

A NestJS controller does not automatically validate its @Body() with a Zod schema just because you annotate the parameter with a type derived from Zod. You must explicitly apply the ZodValidationPipe. If you forget, the body parameter receives unvalidated any from the request, and TypeScript’s type annotation is meaningless — the type says CreateUserInput but the runtime value is whatever the client sent.

This is the TypeScript equivalent of forgetting [ApiController] on a controller in ASP.NET Core — your Data Annotations exist but nothing calls them.

Two ways to apply the pipe — per-handler (explicit) or globally:

// Option 1: Per-handler — most flexible
@Post()
@UsePipes(new ZodValidationPipe(createUserSchema))
async create(@Body() dto: CreateUserInput) { ... }

// Option 2: Per-parameter — more granular
@Post()
async create(@Body(new ZodValidationPipe(createUserSchema)) dto: CreateUserInput) { ... }

// Option 3: Global — a single pipe applied to every handler. This works for class-validator's
// ValidationPipe because it reads metadata from DTO classes; a Zod pipe needs a specific schema
// per handler, so a global Zod pipe is uncommon.
app.useGlobalPipes(new ValidationPipe());

For teams using class-validator (the other common approach), the global ValidationPipe is standard. With Zod, per-handler or per-parameter pipes are more explicit and less magical — easier to audit and debug.


Hands-On Exercise

Build the complete type-safe user feature described in this article, from scratch. The goal is to have every layer connected and to observe TypeScript catching type errors across the stack.

Setup:

# Create a pnpm monorepo
mkdir user-demo && cd user-demo
pnpm init

# Create the workspace configuration
cat > pnpm-workspace.yaml << 'EOF'
packages:
  - 'apps/*'
  - 'packages/*'
EOF

# Create packages
mkdir -p packages/shared/src/schemas
mkdir -p apps/api/src
mkdir -p apps/web/src

# Install Zod in shared
cd packages/shared && pnpm init && pnpm add zod

Step 1 — Shared schemas. Create packages/shared/src/schemas/user.schema.ts with the createUserSchema, updateUserSchema, and userResponseSchema from this article. Export the inferred types.

Step 2 — Prisma schema. In apps/api, initialize Prisma (pnpm add prisma @prisma/client && pnpm prisma init), write the User model, run prisma migrate dev, and run prisma generate.

Step 3 — NestJS API. Build the UsersController, UsersService, and ZodValidationPipe. Verify that the controller’s @Body() parameter type matches CreateUserInput and that the service’s return type matches UserResponse.

Step 4 — Introduce a deliberate type error. In the service’s create method, attempt to return an object that includes a field that does not exist on UserResponse. Observe that TypeScript flags the error before you run the application.
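
The same excess-property check fires whether the literal is returned from a method with an annotated return type or assigned to a typed variable. A standalone sketch of the kind of error to expect (the field values are arbitrary):

import type { UserResponse } from '@repo/shared';

const response: UserResponse = {
  id: 1,
  name: 'Alice',
  email: 'alice@example.com',
  bio: null,
  createdAt: new Date().toISOString(),
  isAdmin: true, // TypeScript error: 'isAdmin' does not exist in type 'UserResponse'
};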

Step 5 — tRPC router. Add the tRPC router with users.create and users.findOne. Export AppRouter.

Step 6 — React form. Build CreateUserForm using React Hook Form with zodResolver. Verify that the form field names are type-checked against CreateUserInput — attempt to use register('nome') (Italian for “name”) and observe the TypeScript error.

Step 7 — Break the contract. Rename a field in createUserSchema — for example, rename name to fullName. Without changing anything else, observe which files TypeScript immediately flags as errors. This is the value of end-to-end type safety: the compiler surfaces the full blast radius of a schema change before you run a single test.


Quick Reference

| .NET Concept | TypeScript Equivalent | Notes |
| --- | --- | --- |
| [Required], [StringLength] | z.string().min(n).max(m) | Zod validator replaces Data Annotations |
| [EmailAddress] | z.string().email() | Same semantic, different syntax |
| [Range(min, max)] | z.number().min(n).max(m) | Identical concept |
| DTO class definition | z.object({ ... }) | Schema IS the type source |
| DTO class + type | z.infer<typeof schema> | Extract TypeScript type from schema |
| [FromBody] + [ApiController] | ZodValidationPipe on @Body() | Must be applied explicitly; not automatic |
| DbSet<User> | prisma.user | Typed Prisma client accessor |
| EF model class | schema.prisma model block | Generates types via prisma generate |
| T? nullable reference type | z.string().nullish() | .nullable() = null, .optional() = undefined, .nullish() = both |
| Partial<T> for PATCH DTO | schema.partial() | Makes all schema fields optional |
| NSwag generated client | tRPC React client | No codegen; types flow via TypeScript modules |
| UserResponseDto class | z.infer<typeof userResponseSchema> | Same pattern as input schemas |
| FluentValidation shared library | Zod schema in shared monorepo package | Imported by both API and frontend |
| ModelStateDictionary errors | ZodError.flatten().fieldErrors | Structured field-level error map |
| N/A (Zod-specific) | schema.parse(data) | Throws on failure; use in try/catch |
| N/A (Zod-specific) | schema.safeParse(data) | Never throws; returns { success, data/error } |
| dbContext.Users.FindAsync() | prisma.user.findUnique() | Returns typed result or null |
| dbContext.Users.Add() + SaveChanges() | prisma.user.create() | Create and return in one call |
| return NotFound() | TRPCError({ code: 'NOT_FOUND' }) | tRPC maps NOT_FOUND to a 404 response |
| NSwag client call | trpc.users.findOne.useQuery() | Wraps TanStack Query's useQuery; typed from the router |
| NSwag client call | trpc.users.create.useMutation() | Wraps TanStack Query's useMutation; typed from the router |

Key Zod validators at a glance:

z.string()              // any string
z.string().min(n)       // minimum length
z.string().max(n)       // maximum length
z.string().email()      // valid email format
z.string().url()        // valid URL format
z.string().uuid()       // UUID format
z.string().regex(/.../) // custom regex
z.string().trim()       // transform: trim whitespace
z.string().toLowerCase()// transform: lowercase
z.number()              // any number
z.number().int()        // integers only
z.number().positive()   // > 0
z.number().min(n)       // >= n
z.number().max(n)       // <= n
z.boolean()             // true or false
z.date()                // Date object
z.enum(['a', 'b'])      // union of literals
z.array(z.string())     // array of strings
z.object({ ... })       // object with shape
z.union([schemaA, schemaB]) // either schema

// Methods chained onto any schema:
schema.optional()       // value may be undefined (field may be absent)
schema.nullable()       // value may be null
schema.nullish()        // value may be null or undefined
schema.default(value)   // use value if the field is absent
schema.transform(fn)    // transform the value after validation
schema.refine(fn, msg)  // custom validation predicate
schema.superRefine(fn)  // custom validation with access to the refinement context (ctx.addIssue)

Further Reading

  • Zod Documentation — The authoritative reference. The “Basic usage” and “Schema methods” sections cover everything used in this article.
  • tRPC Documentation — Start with “Quickstart” and “Routers”. The React Query integration section covers useQuery and useMutation.
  • Prisma Documentation — TypeScript — Covers type generation in detail, including how Prisma’s types relate to your schema.
  • React Hook Form — Zod Integration — The schema validation section shows the zodResolver integration used in the form example in this article.

Modules & Imports: using/namespace vs. import/export

For .NET engineers who know: namespace, using directives, assembly references, and the way C# automatically shares all symbols within a namespace
You’ll learn: How TypeScript’s ES module system works, why explicit imports of every symbol are required, and the practical patterns (barrel files, path aliases, dynamic imports) you will use daily
Time: 12 min read


The .NET Way (What You Already Know)

In C#, code organization is layered. A namespace groups types logically — it is a naming scope, not a file boundary. A single namespace can span dozens of files across multiple assemblies, and a single file can contain multiple namespaces. The using directive at the top of a file brings that namespace’s symbols into scope for the rest of the file.

// UserService.cs — the namespace declaration tells the compiler where this type lives
namespace MyApp.Services;

using MyApp.Models;     // Bring in the Models namespace
using MyApp.Data;       // Bring in the Data namespace
using System.Threading; // Bring in a BCL namespace from the System assembly

public class UserService
{
    // 'User' and 'UserDto' are from MyApp.Models
    // 'AppDbContext' is from MyApp.Data
    // All available without any further import syntax
    public async Task<UserDto> GetUser(int id, AppDbContext db)
    {
        var user = await db.Users.FindAsync(id);
        return new UserDto(user);
    }
}

The critical behavior here: once you write using MyApp.Models;, every public type in that namespace is available. You do not import User and UserDto individually — you import the namespace and get everything in it. The compiler resolves which assemblies contain those namespaces via the project references in your .csproj.

Global usings (introduced in C# 10) take this further — a single global using MyApp.Models; makes a namespace available across every file in the project without any per-file declaration.

This “import a namespace, get everything in it” model is deeply ingrained in how .NET engineers think about code organization. TypeScript works completely differently.


The TypeScript Way

ES Modules: The Foundation

TypeScript uses the ES module system (ECMAScript Modules, or ESM). In this model, every file is its own module. A symbol (a function, class, interface, constant, or type) is private to its file by default. To make it visible to other files, you must explicitly export it. To use it in another file, you must explicitly import it.

There is no concept equivalent to “namespace” as a shared scope spanning multiple files. The file boundary is the module boundary.

// user.service.ts
// 'User' and 'UserDto' are not available just because they're in the same project.
// You must import each symbol explicitly.
import { User } from './models/user.model';
import { UserDto } from './models/user.dto';
import { AppDataSource } from '../data/app-data-source';

export class UserService {
    async getUser(id: number): Promise<UserDto> {
        const user = await AppDataSource.findOne(User, { where: { id } });
        return new UserDto(user);
    }
}

The C# side-by-side makes the contrast clear:

| C# | TypeScript |
| --- | --- |
| using MyApp.Models; | import { User, UserDto } from './models'; |
| Imports a namespace (all symbols) | Imports named symbols individually |
| Resolution via assembly references in .csproj | Resolution via file paths or node_modules |
| Compiler resolves namespaces across the project | Module loader resolves by file path |
| global using makes symbols project-wide | No direct equivalent (barrel files come close) |
| Namespace can span multiple files | One file = one module |

Named Exports vs. Default Exports

TypeScript (inheriting from JavaScript’s module history) supports two export styles: named exports and default exports.

// --- Named exports ---
// user.model.ts
export interface User {
    id: number;
    email: string;
    name: string;
}

export type UserRole = 'admin' | 'member' | 'viewer';

export function createUser(email: string, name: string): User {
    return { id: Date.now(), email, name };
}

// To import named exports, use braces and the exact name:
import { User, UserRole, createUser } from './user.model';

// You can alias if there's a naming conflict:
import { User as UserModel } from './user.model';
// --- Default export ---
// user.service.ts
export default class UserService {
    async getUser(id: number): Promise<User> { /* ... */ }
}

// To import a default export, no braces — and the name is up to the importer:
import UserService from './user.service';
import MyUserService from './user.service'; // Same module, different local name — works

We use named exports in all project code. Default exports exist and you will encounter them in older codebases, React component files (some conventions use them), and third-party libraries — but they introduce friction:

  • IDEs and refactoring tools handle named exports more reliably
  • export default allows the importer to use any name, which makes grep-based search for usages harder
  • A file with a default export provides no signal about what it contains until you open it
  • Auto-import tools in VS Code work more predictably with named exports

The practical rule: always use named exports in your own code. When consuming libraries that use default exports, import them normally — you cannot change the library.

// Prefer this:
export class UserService { /* ... */ }
export interface UserDto { /* ... */ }

// Not this:
export default class UserService { /* ... */ }

Barrel Files (index.ts)

With named exports and explicit per-symbol imports, import paths can become verbose. A deeply nested module might require:

import { UserService } from '../../services/users/user.service';
import { UserDto } from '../../models/users/user.dto';
import { CreateUserInput } from '../../models/users/create-user.input';

Barrel files solve this. A barrel is an index.ts file that re-exports symbols from the files in its directory, providing a single clean entry point for the module.

// src/users/index.ts — the barrel file
export { UserService } from './user.service';
export { UserDto } from './user.dto';
export { CreateUserInput } from './create-user.input';
export type { UserRole } from './user.model';

Now all consumers import from the directory instead of individual files:

// Before barrel:
import { UserService } from '../../services/users/user.service';
import { UserDto } from '../../models/users/user.dto';

// After barrel:
import { UserService, UserDto } from '../../services/users';
// Node's module resolver sees '../../services/users', looks for index.ts, finds it

This is the closest TypeScript analog to the C# using MyApp.Services; pattern. The barrel defines the public API of the directory module, and consumers import from that boundary rather than from internal files.

Barrel file gotcha: Barrel files can introduce circular dependency issues and can bloat bundle sizes if a bundler cannot tree-shake effectively. Keep barrel files at meaningful architectural boundaries (a feature module, a shared utilities package) rather than creating one for every directory. See the Gotchas section for more detail.

Path Aliases

Even with barrel files, relative paths from deep files (../../../shared/utils) are fragile — moving a file breaks every import that referenced it by relative path. Path aliases solve this by letting you define short, absolute-looking import paths.

In tsconfig.json:

{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["src/*"],
      "@shared/*": ["src/shared/*"],
      "@features/*": ["src/features/*"]
    }
  }
}

Now any file in the project can import using the alias:

// Instead of this (relative, fragile):
import { UserService } from '../../../features/users/user.service';

// Use this (aliased, stable):
import { UserService } from '@features/users/user.service';
import { formatDate } from '@shared/utils/date';

Path aliases are a TypeScript compiler feature — they tell tsc how to resolve imports. However, the compiled JavaScript still needs a runtime that understands these aliases. Depending on your toolchain:

  • NestJS (Node.js): Register the tsconfig-paths package at startup (for example, -r tsconfig-paths/register when running node or ts-node)
  • Next.js: Next.js reads tsconfig.json paths automatically and configures its webpack/Turbopack resolution accordingly
  • Vitest: Configure resolve.alias in vitest.config.ts to match your tsconfig.json paths (see the sketch after this list)
  • Vite: Same as Vitest (same config structure)
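
A minimal vitest.config.ts sketch matching the paths defined above (the file layout is assumed to mirror the tsconfig.json example):

// vitest.config.ts
import { defineConfig } from 'vitest/config';
import { fileURLToPath, URL } from 'node:url';

export default defineConfig({
  resolve: {
    // Mirror the "paths" entries from tsconfig.json
    alias: {
      '@': fileURLToPath(new URL('./src', import.meta.url)),
      '@shared': fileURLToPath(new URL('./src/shared', import.meta.url)),
      '@features': fileURLToPath(new URL('./src/features', import.meta.url)),
    },
  },
});

Keeping this alias block and the tsconfig.json paths in sync is a manual step; some teams use a plugin such as vite-tsconfig-paths to read the tsconfig directly instead.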

The @/ prefix is a convention (not a requirement) for the project root alias. You will see it in virtually every Next.js and NestJS project.

// The @ is just a convention — it signals "this is an alias, not a relative path"
import { Button } from '@/components/ui/button';
import { db } from '@/lib/db';
import { authOptions } from '@/lib/auth';

CommonJS vs. ESM: The Legacy Situation

This is the part of the module story that no one enjoys explaining. TypeScript’s type system is clean and consistent. The underlying JavaScript module system is not — it has two incompatible formats with a messy coexistence story.

CommonJS (CJS) is Node.js’s original module format, designed in 2009:

// CommonJS (you will see this in older Node.js code and many npm packages)
const express = require('express');
const { Router } = require('express');

// Export an object in one assignment:
module.exports = { UserService };
// Or attach named exports one at a time:
module.exports.UserService = UserService;

ES Modules (ESM) is the format standardized in ES2015 (ES6) and is now the universal standard:

// ESM (what TypeScript compiles to when targeting modern environments)
import express from 'express';
import { Router } from 'express';

export { UserService };
export default UserService;

The problem: Node.js supported CJS for years before ESM stabilized. Many npm packages still ship CJS-only. When you import a CJS package from ESM code, Node.js has an interop layer, but it has edge cases. When TypeScript compiles your code, it can target either format depending on your tsconfig.json’s module setting.

Where we are now (2026):

| Setting | When to use |
| --- | --- |
| "module": "ESNext" | Next.js, Vite-based projects, modern frontend |
| "module": "CommonJS" | NestJS (default), older Node.js projects |
| "module": "NodeNext" | New Node.js projects that want native ESM with full Node.js compatibility |

For most practical work, you will not think about this directly — NestJS defaults to CJS, Next.js handles ESM automatically, and the frameworks abstract the difference. You will care about it when:

  1. A package you want to use ships ESM-only and your NestJS project targets CJS (fix: use dynamic import() or find a CJS version)
  2. You see ERR_REQUIRE_ESM in your Node.js logs (a CJS require() tried to load an ESM-only package)
  3. You are configuring a new project from scratch and must choose the right tsconfig.json module setting (a minimal example follows this list)
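
For the third case, a minimal tsconfig.json sketch for a new native-ESM Node.js project (the settings other than module and moduleResolution are illustrative defaults):

{
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "strict": true,
    "outDir": "dist"
  }
}

Pair this with "type": "module" in package.json so Node.js treats the compiled .js files as ES modules.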

The signal in any package’s documentation: look for "type": "module" in package.json (ESM-only) or the presence of separate .cjs and .mjs file extensions (dual-mode package).
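
For reference, a dual-mode package’s package.json often looks roughly like this (a sketch; real packages vary in layout and file names):

{
  "name": "some-dual-mode-package",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}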

Dynamic Imports

Standard import statements are static — they execute at module load time and are resolved before any code runs. TypeScript also supports dynamic imports via import(), which returns a Promise and defers loading until runtime.

// Static import — resolved at module load time
import { HeavyLibrary } from 'heavy-library';

// Dynamic import — deferred until this code path executes
async function generateReport(): Promise<void> {
    // Only load the PDF library if this function is actually called
    const { PDFDocument } = await import('pdf-lib');
    const doc = await PDFDocument.create();
    // ...
}

This is the equivalent of lazy loading an assembly in .NET — you defer the cost until the code is needed.

Common uses:

// Conditional loading based on environment
const logger = process.env.NODE_ENV === 'production'
    ? await import('./loggers/structured-logger')
    : await import('./loggers/console-logger');

// Route-level code splitting in Next.js (handled automatically by the framework,
// but you can trigger it manually with dynamic())
import dynamic from 'next/dynamic';

const HeavyChart = dynamic(() => import('./components/HeavyChart'), {
    loading: () => <Skeleton />,
    ssr: false, // Don't render on the server
});

// Plugin-style architecture
async function loadPlugin(name: string) {
    const plugin = await import(`./plugins/${name}`);
    return plugin.default;
}

Dynamic imports are also the standard solution for loading ESM-only packages from a CJS module:

// In a CJS NestJS module, importing an ESM-only package
async function useEsmOnlyPackage() {
    const { someFunction } = await import('esm-only-package');
    return someFunction();
}

How Module Resolution Works

When TypeScript (and Node.js at runtime) sees import { UserService } from './users', it needs to find the actual file. The resolution algorithm checks, in order:

  1. ./users.ts
  2. ./users.tsx
  3. ./users/index.ts
  4. ./users/index.tsx
  5. (For non-relative imports) node_modules/users/...

For non-relative imports like import { Injectable } from '@nestjs/common', the resolver looks in node_modules starting from the importing file’s directory and walking up until it finds the package.

The moduleResolution setting in tsconfig.json controls the exact algorithm:

| Setting | Behavior |
| --- | --- |
| "moduleResolution": "node" | Classic Node.js resolution. Works for CJS. Common default. |
| "moduleResolution": "bundler" | For projects using a bundler (Vite, Webpack). More permissive. |
| "moduleResolution": "NodeNext" | Strict ESM-compatible resolution. Requires explicit extensions in imports. |

For most NestJS projects, "node" is correct. For Next.js and Vite projects, "bundler" is what the framework sets. You will rarely need to change these defaults.

Circular Dependencies

Circular dependencies occur when module A imports from module B, and module B imports from module A. TypeScript will often not error on circular imports — the code might even appear to work. But at runtime, one of the modules will receive undefined for the imported symbol because the circular module was not fully initialized when the import was evaluated.

// --- PROBLEMATIC circular dependency ---

// user.service.ts
import { OrderService } from './order.service'; // A imports B

export class UserService {
    constructor(private orderService: OrderService) {}
}

// order.service.ts
import { UserService } from './user.service'; // B imports A

export class OrderService {
    constructor(private userService: UserService) {}
}

At startup, one of these will receive undefined as its import. The symptoms are obscure runtime errors that do not match what the types say.

The fix is architectural: extract shared types or logic into a third module that both depend on.

// shared-types.ts — no dependencies on user or order services
export interface UserSummary { id: number; name: string; }
export interface OrderSummary { id: number; userId: number; total: number; }

// user.service.ts — depends only on shared types
import { UserSummary } from './shared-types';
export class UserService { /* ... */ }

// order.service.ts — depends only on shared types
import { OrderSummary, UserSummary } from './shared-types';
export class OrderService { /* ... */ }

In NestJS specifically, the framework’s DI container can help with true circular service dependencies using forwardRef(), but this is a code smell — circular service dependencies almost always indicate a domain modeling problem.
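For reference, this is roughly what forwardRef() looks like when you genuinely cannot break the cycle — a sketch using the UserService/OrderService pair from above:

// user.service.ts — defer resolution of OrderService until both modules have loaded
import { Injectable, Inject, forwardRef } from '@nestjs/common';
import { OrderService } from './order.service';

@Injectable()
export class UserService {
    constructor(
        @Inject(forwardRef(() => OrderService))
        private readonly orderService: OrderService,
    ) {}
}

Prefer the extraction fix shown above; reach for forwardRef() only when the domain genuinely requires the cycle.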


Key Differences

| Concept | C# | TypeScript |
| --- | --- | --- |
| Organizing code | namespace — logical grouping, spans files | Module file — one file = one module boundary |
| Importing symbols | using MyApp.Models; brings all symbols | import { User, UserDto } from './models' — explicit per-symbol |
| Default visibility | All public types in a namespace accessible with using | All symbols private by default; export makes them visible |
| Project-wide imports | global using | No direct equivalent; barrel files reduce per-file imports |
| Shorter import paths | N/A (namespace hierarchy is the path) | Path aliases in tsconfig.json (@/src/...) |
| Lazy loading | Assembly lazy loading or MEF | import() dynamic import returns a Promise |
| Circular references | Compiler error for ambiguous references; generally prevented | TypeScript may not error; runtime undefined is the symptom |
| Module format | Single managed code format (IL) | Two formats: CJS and ESM; interop layer between them |

Gotchas for .NET Engineers

1. Everything in a Namespace Is Not Automatically Available

This is the most common mental model error .NET engineers make when starting with TypeScript. In C#, if User and UserService are both in MyApp.Services, and you write using MyApp.Services;, both are available. You do not think about this — it is invisible.

In TypeScript, if User is defined in user.model.ts and UserService is defined in user.service.ts, they know nothing about each other until you write an explicit import. Being in the same directory, having similar names, or being part of the same logical feature — none of this creates any automatic relationship.

// This will not work — TypeScript will error with "Cannot find name 'User'"
// even if user.model.ts is in the same directory
export class UserService {
    getUser(id: number): User { // ERROR: User is not imported
        // ...
    }
}

// This is required:
import { User } from './user.model'; // Explicit import of every symbol used

export class UserService {
    getUser(id: number): User { // Works
        // ...
    }
}

The consequence for productivity: you will spend time adding imports, especially early on. VS Code’s TypeScript language server auto-import helps significantly — when you type a symbol name it recognizes, it offers to add the import automatically (Ctrl+. or the lightbulb icon). Enable “Auto Import Suggestions” in VS Code settings and use it constantly.

2. Barrel Files Can Silently Break Tree-Shaking and Create Initialization Ordering Bugs

Barrel files look like a clean solution to verbose imports, but they have two production-grade issues.

Tree-shaking degradation: When a bundler (webpack, Rollup, Vite) processes a barrel file, it must include any module that is re-exported, even if the consumer only uses one symbol. If your barrel re-exports from 20 files and a page only uses one symbol, a poorly configured bundler may include all 20 modules in the bundle. Modern bundlers with ESM and proper "sideEffects": false in package.json handle this correctly, but it requires verification.
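If your project bundles or publishes barrels, it is worth declaring the absence of import-time side effects explicitly — a minimal sketch of the package.json field (documented by webpack; other bundlers honor it to varying degrees, and "my-app" is a placeholder name):

// package.json — tells the bundler it may drop re-exported modules that are never used
{
  "name": "my-app",
  "sideEffects": false
}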

Initialization ordering bugs: When barrel files create circular re-export chains, modules may initialize in an unexpected order, causing undefined at runtime. This is especially tricky in NestJS module definitions where providers import from barrels that eventually re-export the importing module.

The mitigation: use barrel files at intentional architectural boundaries (the public API of a feature module), not automatically for every directory. If you encounter undefined for a symbol that your types say should be an object, suspect a barrel-induced circular initialization issue and remove the barrel temporarily to diagnose.

3. Path Aliases Require Configuration in Every Tool — Not Just tsconfig.json

When you add a path alias to tsconfig.json, TypeScript’s type checker resolves imports correctly. But TypeScript’s compiler output is JavaScript — it does not transform alias paths. The runtime (Node.js, a bundler, a test runner) also needs to know about the aliases.

This means your alias configuration must be registered in every tool that processes the code:

// tsconfig.json — TypeScript type checking
{
  "compilerOptions": {
    "paths": { "@/*": ["src/*"] }
  }
}
// vitest.config.ts — test runner must also know
import { defineConfig } from 'vitest/config';
import path from 'path';

export default defineConfig({
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src'),
    },
  },
  test: { /* ... */ },
});
// For NestJS running with ts-node: register tsconfig-paths at startup
// In package.json scripts:
// "start:dev": "ts-node -r tsconfig-paths/register src/main.ts"

If you add an alias to tsconfig.json and immediately get “Cannot find module '@/…'” at runtime or in tests, the runtime-level alias registration is missing. The TypeScript compiler accepted it (because it reads tsconfig.json), but the tool executing the JavaScript does not know about the alias.

4. Default Exports Make Refactoring Harder Than It Looks

If you come from a React codebase using export default, you will encounter components that can be imported under any name:

// These all import the same default export:
import UserCard from './UserCard';
import Card from './UserCard';
import WhateverIWant from './UserCard';

This makes automated refactoring unreliable. If you rename the component inside the file, no import needs to update — but now the file’s name, the internal name, and the import alias are all potentially different. When searching for usages of UserCard in a large codebase, you will miss the files that imported it as Card.

Named exports enforce a single name for a symbol across the codebase, making search, refactoring, and code review more reliable.

5. The require() / import Mismatch Will Produce Runtime Errors, Not Type Errors

If you are in a CJS context (common in NestJS by default) and you require() an ESM-only package, you will get this at runtime:

Error [ERR_REQUIRE_ESM]: require() of ES Module node_modules/some-package/index.js
not supported. Instead change the require of index.js to a dynamic import() which is
available in all CommonJS modules.

TypeScript’s type checker cannot detect this — from the type system’s perspective, both formats look the same. The error only surfaces at runtime, often in production if your test suite does not exercise that code path.

The fix depends on the situation:

// Option 1: Use dynamic import() to load ESM from CJS
const { someFunction } = await import('esm-only-package');

// Option 2: Find a CJS-compatible version or an alternative package
// Check npm for packages like 'some-package-cjs'

// Option 3: Convert your NestJS project to ESM (non-trivial, breaks some NestJS internals)
// Generally not recommended unless you have a specific reason

Check a package’s package.json for "type": "module" before adding it as a dependency if you are in a CJS project.


Hands-On Exercise

This exercise builds a mini NestJS feature module using the module patterns covered in this article. You will practice named exports, barrel files, and path aliases together.

Setup: Assume you have a working NestJS project. If not, nest new module-exercise --package-manager pnpm creates one.

Task: Create a notifications feature module with the following structure:

src/
  notifications/
    dto/
      create-notification.dto.ts    — Zod schema + inferred type
      notification-response.dto.ts  — Response type
    entities/
      notification.entity.ts        — Notification class/interface
    notifications.service.ts        — Service with basic CRUD stubs
    notifications.controller.ts     — REST controller
    notifications.module.ts         — NestJS module definition
    index.ts                        — Barrel file (public API of this module)

Step 1: Create src/notifications/entities/notification.entity.ts with a named export:

export interface Notification {
    id: string;
    userId: string;
    message: string;
    read: boolean;
    createdAt: Date;
}

Step 2: Create src/notifications/dto/create-notification.dto.ts using Zod for both validation and type inference (as covered in Article 2.3):

import { z } from 'zod';

export const createNotificationSchema = z.object({
    userId: z.string().uuid(),
    message: z.string().min(1).max(500),
});

export type CreateNotificationDto = z.infer<typeof createNotificationSchema>;

Step 3: Create the service with explicit imports — no implicit namespace access:

// src/notifications/notifications.service.ts
import { Injectable } from '@nestjs/common';
import { Notification } from './entities/notification.entity';
import { CreateNotificationDto } from './dto/create-notification.dto';

@Injectable()
export class NotificationsService {
    private notifications: Notification[] = [];

    create(dto: CreateNotificationDto): Notification {
        const notification: Notification = {
            id: crypto.randomUUID(),
            userId: dto.userId,
            message: dto.message,
            read: false,
            createdAt: new Date(),
        };
        this.notifications.push(notification);
        return notification;
    }

    findByUser(userId: string): Notification[] {
        return this.notifications.filter(n => n.userId === userId);
    }
}

Step 4: Create the barrel file:

// src/notifications/index.ts
export { NotificationsModule } from './notifications.module';
export { NotificationsService } from './notifications.service';
export type { Notification } from './entities/notification.entity';
export type { CreateNotificationDto } from './dto/create-notification.dto';

Step 5: Add a path alias to tsconfig.json:

{
  "compilerOptions": {
    "paths": {
      "@notifications/*": ["src/notifications/*"],
      "@notifications": ["src/notifications/index.ts"]
    }
  }
}

Step 6: From any other module in the project, verify you can import from the barrel using the alias:

// src/some-other-feature/some.service.ts
import { NotificationsService } from '@notifications';
import type { Notification } from '@notifications';

Verification: Run pnpm tsc --noEmit to confirm no type errors. Then run the app with pnpm start:dev and verify the module loads without resolution errors. If you hit a runtime “Cannot find module '@notifications'” error instead, the runtime-level alias registration is missing — register tsconfig-paths as described earlier in this article.

Extension: Add a second module (email-sender) that the NotificationsService depends on, and deliberately create a circular import between them. Observe the runtime behavior, then resolve the circular dependency by extracting a shared interface into a notifications.types.ts file that both modules import from.


Quick Reference

| .NET Concept | TypeScript Equivalent | Notes |
| --- | --- | --- |
| namespace MyApp.Models | File is the module; no namespace declaration needed | The file path IS the module identity |
| using MyApp.Models; | import { User, UserDto } from './models'; | Must name each symbol explicitly |
| using MyApp.Models; (all symbols) | import * as Models from './models'; | Avoid — prevents tree-shaking |
| global using MyApp.Services; | Barrel file + path alias | Not exactly equivalent; reduces per-file verbosity |
| Public class (visible outside assembly) | export class Foo | Default is private to file without export |
| Internal class (same assembly only) | No direct equivalent | Barrel files simulate this by not re-exporting |
| Assembly reference in .csproj | Listed in package.json dependencies | Resolved from node_modules |
| Lazy loading (MEF / Assembly.Load) | const m = await import('./my-module') | Returns a Promise; resolved at call time |
| Project reference in .csproj | pnpm workspace dependency | "@myapp/shared": "workspace:*" in package.json |
| using alias (using Svc = MyApp.Services.UserService) | import { UserService as Svc } from '...' | Inline alias in the import statement |
| Re-export (forwarding) | export { Foo } from './foo' | Used in barrel files |
| Default export (avoid) | export default class Foo | Import with import Foo from '...' (no braces) |
| Named export (prefer) | export class Foo | Import with import { Foo } from '...' |
| Path alias | tsconfig.json paths + runtime registration | Must configure in tsconfig AND vitest/bundler |

Further Reading

  • TypeScript Module Documentation — The official handbook covers every module mode, resolution algorithm, and tsconfig option with examples. The “Modules” chapter is the authoritative reference.
  • Node.js ESM Documentation — Covers the CJS/ESM interop rules, .cjs/.mjs file extensions, and the "type": "module" package flag. Essential reading when you hit ERR_REQUIRE_ESM.
  • TypeScript Module Resolution — The tsconfig reference for moduleResolution, module, paths, and baseUrl. Use this when configuring a new project from scratch or diagnosing a resolution error.
  • Barrel Files: Good or Bad? — A balanced analysis of barrel file trade-offs, with specific guidance on when they help and when they hurt bundle size and build performance.

Decorators & Metadata: Attributes vs. Decorators

For .NET engineers who know: C# attributes, System.Reflection, custom Attribute subclasses, and how ASP.NET Core uses attributes for routing, authorization, and model binding

You’ll learn: How TypeScript decorators differ from C# attributes, how NestJS uses them to build an ASP.NET-like framework, how the reflect-metadata library bridges the gap, and why the TC39 Stage 3 decorator spec matters even if you never touch it directly

Time: 15-20 minutes

The surface syntax is deceptively similar. [HttpGet("/users")] in C# and @Get('/users') in TypeScript look like the same concept with different brackets. They are not the same. Understanding what actually happens when TypeScript decorators execute — and why — is what separates an engineer who can read NestJS code from one who can write and debug it confidently.


The .NET Way (What You Already Know)

In C#, an attribute is a class that inherits from System.Attribute. When you apply [HttpGet("/users")] to a method, the CLR stores metadata about that method in the assembly’s manifest. Nothing executes at definition time. The attribute instance is not created until something calls GetCustomAttributes() at runtime, typically the ASP.NET Core framework during startup.

// Definition: just a class inheriting Attribute. No magic.
[AttributeUsage(AttributeTargets.Method)]
public class LogExecutionTimeAttribute : Attribute
{
    public string Label { get; }

    public LogExecutionTimeAttribute(string label = "")
    {
        Label = label;
    }
}

// Application: stores metadata. Nothing runs here.
public class UserService
{
    [LogExecutionTime("GetUser")]
    public async Task<User> GetUserAsync(int id)
    {
        return await _repo.FindAsync(id);
    }
}

// Consumption: reflection reads the metadata at runtime, usually during startup.
var method = typeof(UserService).GetMethod("GetUserAsync");
var attr = method?.GetCustomAttribute<LogExecutionTimeAttribute>();
if (attr != null)
{
    Console.WriteLine($"Label: {attr.Label}"); // "GetUser"
}

The key properties of C# attributes:

  • Pure metadata — no code runs when you apply an attribute. The attribute instance is constructed only on demand, by a caller using reflection.
  • Type-safe — AttributeUsage restricts where they can be applied. The compiler enforces this.
  • Read-only at runtime — attributes describe the target; they cannot modify it.
  • Framework-driven consumption — ASP.NET Core reads attributes during startup to build route tables, authorization policies, and filter pipelines.

This model is clean, predictable, and entirely separate from runtime behavior. The attribute and the code it decorates are independent.


The TypeScript Way

Decorators Are Functions, Not Metadata

TypeScript decorators are functions that execute at class definition time. When the JavaScript engine loads the module containing a decorated class, the decorator functions run immediately. They receive the decorated target as an argument and can — and often do — modify it.

This is the critical difference: applying a decorator is a function call disguised as declarative syntax.

// A class decorator receives the constructor function as its argument.
// It runs when the module is first loaded, before any instance is created.
function LogClass(constructor: Function) {
    console.log(`Class defined: ${constructor.name}`);
    // You can modify the prototype, wrap the constructor, or do anything else here.
}

@LogClass  // This calls LogClass(UserService) at module load time.
class UserService {
    getUser(id: number) { /* ... */ }
}

// Console output appears immediately when this module is imported:
// "Class defined: UserService"

TypeScript supports four kinds of decorators, each receiving a different target:

Class decorators receive the constructor:

function Singleton<T extends { new(...args: unknown[]): object }>(constructor: T) {
    let instance: InstanceType<T> | null = null;
    return class extends constructor {
        constructor(...args: unknown[]) {
            if (instance) return instance;
            super(...args);
            instance = this as InstanceType<T>;
        }
    };
}

@Singleton
class DatabaseConnection {
    connect() { /* ... */ }
}

Method decorators receive the prototype, the method name, and the property descriptor — giving full control over the method’s implementation:

function LogExecutionTime(label: string) {
    // The outer function is the decorator factory — it returns the actual decorator.
    return function (
        target: object,
        propertyKey: string,
        descriptor: PropertyDescriptor
    ) {
        const originalMethod = descriptor.value as (...args: unknown[]) => unknown;

        // Replace the method implementation entirely.
        descriptor.value = async function (...args: unknown[]) {
            const start = performance.now();
            const result = await originalMethod.apply(this, args);
            const duration = performance.now() - start;
            console.log(`[${label}] ${propertyKey}: ${duration.toFixed(2)}ms`);
            return result;
        };

        return descriptor;
    };
}

class UserService {
    @LogExecutionTime('UserService')
    async getUser(id: number): Promise<User> {
        return this.repo.findById(id);
    }
}

This is different from a C# attribute in a critical way: the LogExecutionTime decorator actually wraps the getUser method. The original implementation is replaced. No reflection needed at call time — the modification is baked in when the class loads.

Property decorators receive the prototype and property name (no descriptor — they cannot directly access the value):

function Required(target: object, propertyKey: string) {
    // Convention: store metadata somewhere for later validation use.
    const requiredProperties: string[] =
        Reflect.getMetadata('required', target) ?? [];
    requiredProperties.push(propertyKey);
    Reflect.defineMetadata('required', requiredProperties, target);
}

class CreateUserDto {
    @Required
    name: string = '';

    @Required
    email: string = '';

    age?: number;
}

Parameter decorators receive the prototype, the method name, and the parameter index:

function Body(target: object, methodName: string, parameterIndex: number) {
    const existingParams: number[] =
        Reflect.getMetadata('body:params', target, methodName) ?? [];
    existingParams.push(parameterIndex);
    Reflect.defineMetadata('body:params', existingParams, target, methodName);
}

class UserController {
    createUser(@Body dto: CreateUserDto): Promise<User> {
        return this.userService.create(dto);
    }
}

The reflect-metadata Library

You cannot build the C# attribute pattern in TypeScript purely with decorator functions. Decorator functions execute and return — there is no built-in storage for arbitrary metadata. The reflect-metadata package provides that storage: a WeakMap-backed API for associating arbitrary key-value data with classes, methods, and parameters.

pnpm add reflect-metadata
// Must be imported once at application entry point.
import 'reflect-metadata';

The API mirrors System.Reflection closely (not coincidentally — it was designed for exactly this use case):

// Store metadata
Reflect.defineMetadata(metadataKey, metadataValue, target);
Reflect.defineMetadata(metadataKey, metadataValue, target, propertyKey);

// Read metadata
const value = Reflect.getMetadata(metadataKey, target);
const value = Reflect.getMetadata(metadataKey, target, propertyKey);

// Check existence
const exists = Reflect.hasMetadata(metadataKey, target);

// List all keys
const keys = Reflect.getMetadataKeys(target);

There are three critically useful built-in metadata keys: design:type, design:paramtypes, and design:returntype. When emitDecoratorMetadata: true is set in tsconfig.json, the TypeScript compiler emits type information as reflect-metadata entries — giving the runtime access to the TypeScript types that are normally erased.

// tsconfig.json must have:
// "experimentalDecorators": true,
// "emitDecoratorMetadata": true

import 'reflect-metadata';

function Injectable() {
    return function (constructor: Function) {
        // The TypeScript compiler has emitted the constructor parameter types
        // as metadata. We can read them here.
        const paramTypes = Reflect.getMetadata('design:paramtypes', constructor);
        console.log(paramTypes);
        // [UserRepository, LoggerService] — the actual constructor function references
    };
}

@Injectable()
class UserService {
    constructor(
        private readonly userRepo: UserRepository,
        private readonly logger: LoggerService,
    ) {}
}

This is how NestJS’s DI container resolves constructor dependencies without any explicit [FromServices] or registration calls specifying types — the type information is emitted into metadata by the TypeScript compiler and read back at runtime.

Side-by-Side: The Same Pattern in C# and TypeScript

// C# — ASP.NET Core routing and DI
[ApiController]
[Route("api/[controller]")]
public class UsersController : ControllerBase
{
    private readonly IUserService _userService;

    // DI container reads IUserService type from constructor,
    // resolves it from the service container.
    public UsersController(IUserService userService)
    {
        _userService = userService;
    }

    [HttpGet("{id:int}")]
    [Authorize(Roles = "Admin")]
    public async Task<ActionResult<UserDto>> GetUser(
        [FromRoute] int id)
    {
        var user = await _userService.GetUserAsync(id);
        return user is null ? NotFound() : Ok(user);
    }
}
// TypeScript — NestJS routing and DI
// The structure and intent are nearly identical.
// The mechanism is completely different.
import { Controller, Get, Param, UseGuards, ParseIntPipe } from '@nestjs/common';

@Controller('users')              // Stores route prefix in reflect-metadata
export class UsersController {
    constructor(
        // NestJS reads design:paramtypes metadata to resolve UserService.
        private readonly userService: UserService,
    ) {}

    @Get(':id')                   // Stores route + HTTP method in metadata
    @UseGuards(AdminGuard)        // Stores guard reference in metadata
    async getUser(
        @Param('id', ParseIntPipe) id: number,  // Stores parameter binding info
    ): Promise<UserDto> {
        return this.userService.getUser(id);
    }
}

From the outside, reading NestJS code feels like reading ASP.NET Core code. Under the hood, every @Get(':id') call has already run (at module load time) and stashed metadata in reflect-metadata. The NestJS bootstrap process then reads all that metadata to construct the router table, DI container, and middleware pipeline — exactly what ASP.NET Core does during startup when it calls GetCustomAttributes() across your assemblies.


How NestJS Uses Decorators: The Full Picture

NestJS is built almost entirely on decorators. Understanding the pattern lets you debug it when it breaks and extend it when needed.

Module Decorators and the DI Container

import { Module } from '@nestjs/common';

@Module({
    imports: [TypeOrmModule.forFeature([User])],
    controllers: [UsersController],
    providers: [UserService, UserRepository],
    exports: [UserService],
})
export class UsersModule {}

@Module() stores its configuration object in reflect-metadata on the UsersModule class. When NestJS bootstraps, it reads this metadata to construct the module graph — equivalent to IServiceCollection registrations in Program.cs, but driven by metadata rather than imperative calls.

@Injectable() marks a class as eligible for DI resolution and causes design:paramtypes to be read when constructing instances. It is your services.AddScoped<UserService>().
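To make the mechanism concrete, here is a minimal sketch of the store-then-read pattern that @Module() relies on. This is not NestJS’s actual implementation (its metadata keys are internal) — just the same shape built on reflect-metadata, with illustrative names:

import 'reflect-metadata';

interface SimpleModuleMetadata {
    controllers?: Function[];
    providers?: Function[];
}

// Decoration time: stash the configuration object on the class.
function SimpleModule(metadata: SimpleModuleMetadata) {
    return function (target: Function) {
        Reflect.defineMetadata('simple:module', metadata, target);
    };
}

// "Bootstrap" time: read it back, the way a framework would.
function readModuleMetadata(target: Function): SimpleModuleMetadata | undefined {
    return Reflect.getMetadata('simple:module', target) as SimpleModuleMetadata | undefined;
}

@SimpleModule({ providers: [] })
class DemoModule {}

console.log(readModuleMetadata(DemoModule)); // { providers: [] }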

Custom Decorators for Real Use Cases

This is where TypeScript decorators become genuinely powerful. You can compose them to build reusable cross-cutting concerns.

A logging decorator with zero framework dependency:

function Logged(level: 'debug' | 'info' | 'warn' = 'info') {
    return function (
        target: object,
        propertyKey: string,
        descriptor: PropertyDescriptor,
    ) {
        const original = descriptor.value as (...args: unknown[]) => Promise<unknown>;

        descriptor.value = async function (...args: unknown[]) {
            const className = target.constructor.name;
            console[level](`${className}.${propertyKey} called`, { args });
            try {
                const result = await original.apply(this, args);
                console[level](`${className}.${propertyKey} returned`, { result });
                return result;
            } catch (error) {
                console.error(`${className}.${propertyKey} threw`, { error });
                throw error;
            }
        };
    };
}

class UserService {
    @Logged('info')
    async createUser(dto: CreateUserDto): Promise<User> {
        return this.repo.save(dto);
    }
}

A NestJS guard composed into a custom decorator:

In NestJS, you frequently combine multiple decorators into one. This is the equivalent of creating a composite ASP.NET attribute:

import { applyDecorators, SetMetadata, UseGuards } from '@nestjs/common';
import { AuthGuard } from './auth.guard';
import { RolesGuard } from './roles.guard';

// In C#, you'd create a composite attribute or a policy.
// In NestJS, you compose decorators into one with applyDecorators.
export function Auth(...roles: string[]) {
    return applyDecorators(
        SetMetadata('roles', roles),
        UseGuards(AuthGuard, RolesGuard),
    );
}

// Usage — reads exactly like a C# composite attribute:
@Controller('admin')
export class AdminController {
    @Get('dashboard')
    @Auth('admin', 'superadmin')  // One decorator, composed behavior
    getDashboard() { /* ... */ }
}

A custom parameter decorator:

import { createParamDecorator, ExecutionContext } from '@nestjs/common';

// Equivalent to writing a custom IModelBinder in ASP.NET Core,
// but applied at the parameter level with a decorator.
export const CurrentUser = createParamDecorator(
    (data: unknown, ctx: ExecutionContext) => {
        const request = ctx.switchToHttp().getRequest();
        return request.user as AuthenticatedUser;
    },
);

@Controller('profile')
export class ProfileController {
    @Get()
    @UseGuards(AuthGuard)
    getProfile(@CurrentUser() user: AuthenticatedUser): ProfileDto {
        return this.profileService.getProfile(user.id);
    }
}

Legacy Decorators vs. TC39 Stage 3 Decorators

This is an area where the JS ecosystem is in active transition, and as a .NET engineer you need to understand why there are two different decorator systems and which one you are using.

The Legacy System: experimentalDecorators

Everything shown so far uses the legacy decorator system, enabled with "experimentalDecorators": true in tsconfig.json. This system was implemented by TypeScript in 2015 based on an early TC39 proposal that was subsequently changed significantly. It is non-standard — it predates the final spec and differs from it in meaningful ways.

NestJS, class-validator, class-transformer, and TypeORM all use the legacy system. It is stable in practice. It will not be removed from TypeScript. But it is not — and never will be — the standard.

The Standard System: TC39 Stage 3

The TC39 decorator proposal reached Stage 3 in 2022, and its design has been stable since. TypeScript 5.0 (released March 2023) shipped support for standard decorators alongside the legacy system. Standard decorators use the same @ syntax but work differently:

| Aspect | Legacy (experimentalDecorators) | Standard (TC39 Stage 3) |
| --- | --- | --- |
| tsconfig.json flag | "experimentalDecorators": true | No flag needed (TS 5.0+) |
| emitDecoratorMetadata | Supported, used by NestJS DI | Not supported — no metadata emission |
| Method decorator signature | (target, key, descriptor) | (value, context) — context is an object |
| Class decorator return | Can return new class | Can return new class |
| Initialization order | Outer-to-inner, bottom-up | Defined precisely in spec |
| Field decorators | Limited, no initial value access | Full access to initializer |
| Metadata API | reflect-metadata (third-party) | Native Symbol.metadata (stage 3 proposal) |

The practical consequence: you cannot use NestJS with standard decorators today. NestJS’s entire DI system depends on emitDecoratorMetadata, which is incompatible with the standard system. NestJS 11 (2025) is working toward standard decorator support, but the ecosystem migration is ongoing.

// tsconfig.json for NestJS — legacy system required
{
    "compilerOptions": {
        "experimentalDecorators": true,
        "emitDecoratorMetadata": true,
        // ...
    }
}

// tsconfig.json for a project NOT using NestJS, using standard decorators
{
    "compilerOptions": {
        // No experimentalDecorators needed for standard TC39 decorators in TS 5.0+
        // ...
    }
}

When you start a new NestJS project with nest new, the CLI sets these flags automatically. When you encounter an article or library using decorators, check which system it targets — the two are not interchangeable.
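For contrast, this is what a method decorator looks like in the standard system — a minimal sketch that compiles in TypeScript 5.0+ without experimentalDecorators; the names logged and Greeter are illustrative:

// Standard decorators receive (value, context) instead of (target, key, descriptor).
function logged<This, Args extends unknown[], Return>(
    value: (this: This, ...args: Args) => Return,
    context: ClassMethodDecoratorContext<This, (this: This, ...args: Args) => Return>,
) {
    const methodName = String(context.name);
    return function (this: This, ...args: Args): Return {
        console.log(`Calling ${methodName}`);
        return value.call(this, ...args);
    };
}

class Greeter {
    @logged
    greet(name: string): string {
        return `Hello, ${name}`;
    }
}

new Greeter().greet('Ada'); // logs "Calling greet"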


Key Differences

| Concept | C# Attributes | TypeScript Decorators (Legacy) |
| --- | --- | --- |
| Execution time | Never — metadata stored, not executed | At class definition (module load) time |
| What they can do | Store metadata only | Modify the target, store metadata, or both |
| Type safety | AttributeUsage enforced by compiler | No compile-time enforcement of where decorators can be applied |
| Composition | Apply multiple attributes, no built-in composition | Composed with applyDecorators() (NestJS) or manual stacking |
| Access to type info | Full reflection at any time | Only available via emitDecoratorMetadata + reflect-metadata |
| Order of execution | No order (reflection reads all at once) | Bottom-to-top for stacked decorators, outer factory first |
| DI integration | Framework reads constructor parameter types via reflection | design:paramtypes emitted by tsc, read by DI container at bootstrap |
| Modification of target | Cannot modify the decorated member | Can replace method implementations, wrap constructors |
| Runtime overhead | Reflection cost at read time | None at call time — modification applied at load time |
| Standardization | Language-level feature since C# 1.0 | Legacy system non-standard; standard system in TS 5.0+ |

Gotchas for .NET Engineers

Gotcha 1: Decorator Execution Order Is Bottom-to-Top, and Decorator Factories Run Top-to-Bottom

When you stack multiple decorators, C# attributes have no defined execution order — the framework reads them in whatever order GetCustomAttributes() returns. TypeScript decorators have a specific order that will surprise you.

For stacked decorators, factories (the outer function calls) execute top-to-bottom, but the actual decorator functions (the returned inner functions) execute bottom-to-top:

function First() {
    console.log('First factory called');          // 1st
    return function (target: object, key: string, desc: PropertyDescriptor) {
        console.log('First decorator applied');   // 4th
    };
}

function Second() {
    console.log('Second factory called');         // 2nd
    return function (target: object, key: string, desc: PropertyDescriptor) {
        console.log('Second decorator applied');  // 3rd
    };
}

class Example {
    @First()
    @Second()
    method() {}
}

// Output:
// First factory called
// Second factory called
// Second decorator applied  ← inner functions run bottom-to-top
// First decorator applied

This matters when decorators wrap a method. The last decorator applied (bottom) wraps the original implementation first. The first decorator (top) wraps the already-wrapped version. The outer wrapper executes first at call time. In .NET, you think about filter order — here you think about wrapping order.
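A minimal sketch makes the call-time consequence visible (Wrap and Example are illustrative names):

function Wrap(label: string) {
    return function (target: object, key: string, descriptor: PropertyDescriptor) {
        const original = descriptor.value as (...args: unknown[]) => unknown;
        descriptor.value = function (this: unknown, ...args: unknown[]) {
            console.log(`${label} before`);
            const result = original.apply(this, args);
            console.log(`${label} after`);
            return result;
        };
    };
}

class Example {
    @Wrap('outer')   // applied second — wraps the already-wrapped method
    @Wrap('inner')   // applied first — wraps the original implementation
    run() { console.log('run body'); }
}

new Example().run();
// outer before
// inner before
// run body
// inner after
// outer after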

Gotcha 2: emitDecoratorMetadata Erases Types You Expect to Be Available

When emitDecoratorMetadata: true is set, TypeScript emits type information for constructor parameters — but only for parameters whose types resolve to something concrete at runtime. Generic types, union types, and interface types do not survive.

// This works — UserRepository is a class, so its constructor function is emitted.
@Injectable()
class UserService {
    constructor(private readonly repo: UserRepository) {}
}

// This SILENTLY FAILS — interfaces do not exist at runtime.
// The emitted paramtype will be Object, not IUserRepository.
// NestJS cannot resolve IUserRepository from the DI container.
@Injectable()
class UserService {
    constructor(private readonly repo: IUserRepository) {} // ← runtime: Object
}

// Fix: use injection tokens explicitly.
import { Inject } from '@nestjs/common';

export const USER_REPOSITORY = Symbol('USER_REPOSITORY');

@Injectable()
class UserService {
    constructor(
        @Inject(USER_REPOSITORY) private readonly repo: IUserRepository,
    ) {}
}

This trips up .NET engineers who are accustomed to programming against interfaces and having DI resolve them automatically. In NestJS, DI works against class types (whose constructor functions are real runtime values) or against explicit tokens. If you inject an interface type without @Inject(), NestJS will resolve whatever happens to be registered under the key Object — which is almost certainly wrong, and the error will not be obvious.
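The token also has to be registered in a module so the container knows what to construct for it — a sketch in which PrismaUserRepository stands in for whatever concrete class implements IUserRepository:

import { Module } from '@nestjs/common';

@Module({
    providers: [
        UserService,
        // Whatever is bound to the token is what @Inject(USER_REPOSITORY) receives.
        { provide: USER_REPOSITORY, useClass: PrismaUserRepository },
    ],
})
export class UsersModule {}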

Gotcha 3: Decorator Side Effects Run at Import Time, Not at Request Time

In ASP.NET Core, your attribute instances are created on demand when the framework needs them — typically during startup reflection or at the point of a specific request (for some filters). In TypeScript, your decorator functions run the moment the module is imported.

This means any code inside a decorator that has side effects — database connections, HTTP calls, filesystem access — runs at module load time, before your application is “ready,” and before any dependency injection is available.

// This runs when the module is imported. Not when the class is instantiated.
// Not when the method is called. At import time.
function CacheResult(ttlSeconds: number) {
    // If you try to access a DI container here, it does not exist yet.
    const cache = new Map<string, unknown>(); // Fine — in-memory
    // const cache = redis.connect(); // This would fail at import time

    return function (target: object, key: string, descriptor: PropertyDescriptor) {
        const original = descriptor.value as (...args: unknown[]) => Promise<unknown>;
        descriptor.value = async function (...args: unknown[]) {
            const cacheKey = JSON.stringify(args);
            if (cache.has(cacheKey)) return cache.get(cacheKey);
            const result = await original.apply(this, args);
            cache.set(cacheKey, result);
            setTimeout(() => cache.delete(cacheKey), ttlSeconds * 1000);
            return result;
        };
    };
}

The pattern for decorators that need services (like a logger, a cache client, or a repository) is to store a reference or a token in metadata at decoration time, then resolve the dependency at call time when the DI container is available. NestJS’s @Inject() follows exactly this pattern — it stores the injection token in metadata during decoration, and the framework resolves the actual instance when constructing the controller.
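A sketch of that split using NestJS’s SetMetadata and Reflector — the 'cache:ttl' key and the CacheTtl / CacheTtlInterceptor names are illustrative, not framework built-ins:

import {
    SetMetadata, Injectable, NestInterceptor, ExecutionContext, CallHandler,
} from '@nestjs/common';
import { Reflector } from '@nestjs/core';
import { Observable } from 'rxjs';

// Decoration time: only store the TTL. No services are touched here.
export const CacheTtl = (seconds: number) => SetMetadata('cache:ttl', seconds);

// Call time: the interceptor is constructed by the DI container, so dependencies exist.
@Injectable()
export class CacheTtlInterceptor implements NestInterceptor {
    constructor(private readonly reflector: Reflector) {}

    intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
        const ttl = this.reflector.get<number>('cache:ttl', context.getHandler());
        // ...consult an injected cache client here, using ttl...
        return next.handle();
    }
}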

Gotcha 4: Circular Imports Break Decorator Metadata Silently

This is a Node.js module system issue that manifests specifically in decorator-heavy code. When module A decorates a class using a token defined in module B, and module B imports from module A (a circular dependency), one of the imports will resolve to undefined at the point the decorator runs — because the other module has not finished loading yet.

In .NET, the assembly linker resolves all type references before any code runs. In Node.js, module loading is sequential and circular references can produce undefined at import time.

// users.module.ts — imports auth.module.ts
// auth.module.ts — imports users.module.ts
// Circular. One of them will import undefined from the other.

// Symptom in NestJS: "Nest can't resolve dependencies of UserService (?).
// Please make sure that the argument AuthService at index [0] is available."

// Fix: use forwardRef() to defer the reference resolution.
import { forwardRef } from '@nestjs/common';

@Module({
    imports: [forwardRef(() => AuthModule)],
    providers: [UserService],
})
export class UsersModule {}

If you see NestJS DI resolution errors that mention ? as a dependency, or errors about circular dependencies, look for circular imports in your module graph before assuming the decorator configuration is wrong.

Gotcha 5: Decorators Applied to Abstract or Base Classes Do Not Automatically Apply to Subclasses

In C#, attributes applied to a virtual method in a base class are visible when reflecting on the override in a subclass (with inherit: true in GetCustomAttributes). TypeScript decorator behavior is different — decorators are applied to the specific class they appear on. A decorator on BaseController does not automatically apply to UsersController extends BaseController.

// C# behavior you might expect:
// [Authorize] on BaseController protects all subclass routes via inheritance.

// TypeScript: @UseGuards(AuthGuard) on BaseController does NOT protect subclasses.
// You must apply it to each subclass or use a global guard.

// Wrong assumption:
class BaseController {
    @UseGuards(AuthGuard)    // Only protects methods directly on BaseController
    protected doSomething() {}
}

class UsersController extends BaseController {
    @Get()
    getUsers() {}  // NOT protected by AuthGuard
}

// Correct approach in NestJS: global guard or explicit guard on each controller.
app.useGlobalGuards(new AuthGuard());

Hands-On Exercise

Build a complete custom validation system using decorators and reflect-metadata that mirrors what class-validator does internally. This exercise teaches you the mechanics that underpin NestJS’s validation pipes.

Setup:

mkdir decorator-exercise && cd decorator-exercise
pnpm init
pnpm add reflect-metadata
pnpm add -D typescript ts-node
pnpm exec tsc --init

Update tsconfig.json:

{
    "compilerOptions": {
        "target": "ES2020",
        "experimentalDecorators": true,
        "emitDecoratorMetadata": true,
        "strict": true
    }
}

Step 1 — Build the validation decorators:

Create src/validators.ts:

import 'reflect-metadata';

const VALIDATORS_KEY = 'validators';

type ValidatorFn = (value: unknown, key: string) => string | null;

function addValidator(target: object, propertyKey: string, validator: ValidatorFn) {
    const existing: Map<string, ValidatorFn[]> =
        Reflect.getMetadata(VALIDATORS_KEY, target) ?? new Map();

    const forKey = existing.get(propertyKey) ?? [];
    forKey.push(validator);
    existing.set(propertyKey, forKey);

    Reflect.defineMetadata(VALIDATORS_KEY, existing, target);
}

export function IsString() {
    return function (target: object, propertyKey: string) {
        addValidator(target, propertyKey, (value, key) =>
            typeof value !== 'string' ? `${key} must be a string` : null,
        );
    };
}

export function MinLength(min: number) {
    return function (target: object, propertyKey: string) {
        addValidator(target, propertyKey, (value, key) =>
            typeof value === 'string' && value.length < min
                ? `${key} must be at least ${min} characters`
                : null,
        );
    };
}

export function IsEmail() {
    return function (target: object, propertyKey: string) {
        addValidator(target, propertyKey, (value, key) => {
            const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
            return typeof value === 'string' && !emailRegex.test(value)
                ? `${key} must be a valid email address`
                : null;
        });
    };
}

export function validate(instance: object): string[] {
    const validators: Map<string, ValidatorFn[]> =
        Reflect.getMetadata(VALIDATORS_KEY, instance) ?? new Map();

    const errors: string[] = [];

    for (const [key, fns] of validators) {
        const value = (instance as Record<string, unknown>)[key];
        for (const fn of fns) {
            const error = fn(value, key);
            if (error) errors.push(error);
        }
    }

    return errors;
}

Step 2 — Apply the decorators to a DTO:

Create src/create-user.dto.ts:

import { IsString, MinLength, IsEmail } from './validators';

export class CreateUserDto {
    @IsString()
    @MinLength(2)
    name: string = '';

    @IsEmail()
    email: string = '';

    @IsString()
    @MinLength(8)
    password: string = '';
}

Step 3 — Validate instances:

Create src/index.ts:

import { CreateUserDto } from './create-user.dto';
import { validate } from './validators';

const validDto = new CreateUserDto();
validDto.name = 'Alice';
validDto.email = 'alice@example.com';
validDto.password = 'securepassword';

const invalidDto = new CreateUserDto();
invalidDto.name = 'A';
invalidDto.email = 'not-an-email';
invalidDto.password = 'short';

console.log('Valid DTO errors:', validate(validDto));
// []

console.log('Invalid DTO errors:', validate(invalidDto));
// [
//   'name must be at least 2 characters',
//   'email must be a valid email address',
//   'password must be at least 8 characters'
// ]

Run it:

pnpm exec ts-node src/index.ts

Extension tasks:

  1. Add an @IsOptional() decorator that marks a field as skippable when undefined. Validation should not run on optional fields with undefined values.
  2. Add an @IsNumber() decorator. Use emitDecoratorMetadata and Reflect.getMetadata('design:type', target, propertyKey) to infer the type without an explicit decorator — see if you can validate numeric fields automatically based on their TypeScript type.
  3. Implement a @ValidateNested() decorator that runs validate() recursively on nested DTO instances, building a nested error structure. Compare this to how class-validator’s @ValidateNested() and @Type() work.

Quick Reference

| C# Concept | TypeScript / NestJS Equivalent | Key Difference |
| --- | --- | --- |
| [HttpGet("/users")] | @Get('/users') | Decorator runs at load time; attribute is lazy metadata |
| [ApiController] | @Controller('users') | Same intent; TS stores metadata, ASP.NET reads it |
| [Authorize(Roles = "Admin")] | @UseGuards(AdminGuard) | TS guard is class-based; attribute is data-driven |
| [FromBody] CreateUserDto dto | @Body() dto: CreateUserDto | Both bind request body; param decorator stores index |
| [FromRoute] int id | @Param('id', ParseIntPipe) id: number | Pipes transform; pipe replaces model binding |
| [Service] / services.AddScoped<T>() | @Injectable() + register in @Module providers | DI auto-wires via design:paramtypes metadata |
| IServiceCollection.AddScoped<I, T>() | { provide: InjectionToken, useClass: T } | Interfaces not real at runtime; use tokens |
| Custom Attribute subclass | Decorator factory function | TS decorator is a function; attribute is a class |
| GetCustomAttribute<T>(method) | Reflect.getMetadata(key, target, method) | Both read stored metadata; TS needs explicit import |
| AttributeUsage(AttributeTargets.Method) | No built-in enforcement | TypeScript does not restrict decorator targets at compile time |
| emitDecoratorMetadata | "emitDecoratorMetadata": true in tsconfig | Emits design:paramtypes — required for NestJS DI |
| [A] [B] stacked on a method | applyDecorators(A, B) | NestJS utility to compose multiple decorators into one |
| forwardRef<T>() | forwardRef(() => T) | Breaks circular module dependencies in NestJS |
| experimentalDecorators flag | "experimentalDecorators": true | Legacy system; required for NestJS, TypeORM, class-validator |
| TC39 Stage 3 decorators | TS 5.0+ without the flag | Standard system; not compatible with NestJS currently |

Further Reading

  • TypeScript Decorators — TypeScript Handbook — The authoritative reference for the legacy decorator system. Covers class, method, property, and parameter decorators with examples.
  • NestJS Custom Decorators — Official NestJS documentation for createParamDecorator, applyDecorators, and composing decorators in the NestJS DI context.
  • reflect-metadata — GitHub — The library that implements the Metadata Reflection API. The README explains the proposal and the API design. Useful background for understanding why NestJS’s DI behaves the way it does.
  • TC39 Decorators Proposal — The Stage 3 proposal repository. Includes the motivation document explaining what changed from the legacy system and why.

Utility Types & Type Patterns for .NET Engineers

For .NET engineers who know: C# generics, DTOs, IReadOnlyList<T>, discriminated unions via class hierarchies, and the Result/Option patterns from functional programming extensions

You’ll learn: The TypeScript utility types and type-level patterns that replace common C# patterns — when they’re strictly better, when they’re a trade-off, and when to reach for them in practice

Time: 15-20 min read

TypeScript ships a library of built-in generic utility types that transform and compose other types at the type level. If you’ve spent time with C# generics and interfaces, you can read Partial<T> and understand the intent immediately. But the patterns built on top of these utilities — branded types, discriminated unions, the Result pattern — depart from the C# idiom in ways that will surprise you. This article maps every pattern to its C# equivalent and tells you honestly when the TypeScript approach wins and when it costs you something.


The .NET Way (What You Already Know)

In C#, type manipulation is done primarily through interfaces, generics, and class hierarchies. When you want to express “a User with only some fields required,” you create a separate DTO:

// C# — a separate class for each shape you need
public record CreateUserDto(string Name, string Email, string? PhoneNumber);
public record UpdateUserDto(string? Name, string? Email, string? PhoneNumber);
public record UserSummaryDto(int Id, string Name);
public record UserDetailDto(int Id, string Name, string Email, string? PhoneNumber, DateTime CreatedAt);

This is explicit and readable. The downside: every time User gains a new field, you revisit each DTO, and renaming a field means edits across four files. The entity and its DTOs drift apart silently — when you add a required field to User, nothing forces the DTOs to follow, and the compiler never notices the gap.

For collections, you use IReadOnlyList<T>, IReadOnlyDictionary<K,V>, or ImmutableList<T>. For nominal typing — making UserId distinct from OrderId even though both are int — you use wrapper types or value objects. For state machines, you use class hierarchies with a common base and pattern matching (switch (state) { case SuccessState s: ... }).

TypeScript gives you different tools for the same problems. Some are strictly better. Some are more powerful but more dangerous. None of them require creating a new class.


The TypeScript Way

Partial<T> — PATCH Operations Without a Separate UpdateDto

Partial<T> takes a type and makes every property optional. For PATCH endpoints, this is the direct replacement for a hand-maintained UpdateUserDto:

// TypeScript — derive the update shape from the base type
interface User {
  id: number;
  name: string;
  email: string;
  phoneNumber: string | null;
  createdAt: Date;
}

// Partial<User> is equivalent to:
// { id?: number; name?: string; email?: string; phoneNumber?: string | null; createdAt?: Date }
type UpdateUserDto = Partial<Omit<User, 'id' | 'createdAt'>>;

// NestJS controller — the body is typed, validated at runtime separately via Zod
@Patch(':id')
async updateUser(
  @Param('id') id: number,
  @Body() dto: UpdateUserDto,
): Promise<User> {
  return this.userService.update(id, dto);
}
// C# equivalent — manual, must be maintained separately from User
public record UpdateUserDto(
    string? Name,
    string? Email,
    string? PhoneNumber
);

The TypeScript version stays synchronized with User automatically. Add avatarUrl: string to User and UpdateUserDto gets it for free. In C#, you’d add it to UpdateUserDto manually or miss it.

The trade-off: Partial<T> makes ALL properties optional, including ones that might not make sense to update. Omit compensates for read-only fields like id and createdAt. For complex cases — some fields required, some optional — you combine utilities:

// Required name, optional everything else
type UpdateUserDto = Required<Pick<User, 'name'>> & Partial<Omit<User, 'id' | 'createdAt' | 'name'>>;

This is where TypeScript’s power becomes its own problem — complex composed types lose readability fast. If a type expression doesn’t fit in a single line of reasonable length, consider whether a named intermediate type would be clearer.
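For example, naming the intermediate shape keeps the same result readable (a readability sketch, same types as above):

// The editable subset of User, named once and reused
type UserEditableFields = Omit<User, 'id' | 'createdAt'>;

// Required name, optional everything else — identical shape, easier to scan
type UpdateUserDto = Required<Pick<UserEditableFields, 'name'>> &
    Partial<Omit<UserEditableFields, 'name'>>;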

Pick<T, K> and Omit<T, K> — View Models Without Ceremony

Pick<T, K> creates a new type with only the specified keys. Omit<T, K> creates a type with the specified keys removed. Both replace the C# pattern of writing separate DTO classes for different views of the same entity.

// Instead of writing UserSummaryDto from scratch:
type UserSummary = Pick<User, 'id' | 'name'>;

// Instead of writing UserDetailDto that might miss new fields:
type UserDetail = Omit<User, 'createdAt'>;

// Combine with Partial for patch-safe view models:
type UserProfile = Pick<User, 'id' | 'name' | 'email'>;
// C# — must be kept in sync manually
public record UserSummary(int Id, string Name);
public record UserDetail(int Id, string Name, string Email, string? PhoneNumber);

When the TypeScript approach is better: Your entity has 15 fields and you need 6 different view shapes. Creating 6 hand-maintained DTOs in C# is noise. Six Pick and Omit expressions that derive from the source type stay correct automatically.

When the TypeScript approach is worse: Serialization and OpenAPI documentation. In C#, your DTO classes carry [JsonPropertyName], [Required], and Swagger annotations. TypeScript utility types produce anonymous structural types — they don’t carry NestJS Swagger decorators. If you need @ApiProperty() decorators on every field for @nestjs/swagger, you still need a class, not a Pick alias. This is the primary reason you’ll see NestJS projects using class-validator DTO classes instead of derived utility types.

The practical rule: use utility types in application logic and internal service interfaces. Use DTO classes decorated for validation and OpenAPI where the contract matters.
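A sketch of that split in a NestJS project — the class carries @nestjs/swagger metadata for the public contract, while internal code derives its shapes from the entity (UserSummaryResponse is an illustrative name):

import { ApiProperty } from '@nestjs/swagger';

// Public contract: a class, so @nestjs/swagger can document it
export class UserSummaryResponse {
    @ApiProperty()
    id!: number;

    @ApiProperty()
    name!: string;
}

// Internal use: derived from the entity, stays in sync automatically
type UserSummary = Pick<User, 'id' | 'name'>;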

Record<K, V> — Typed Dictionaries

Record<K, V> creates an object type with keys of type K and values of type V. It’s the TypeScript equivalent of Dictionary<K, V> or IReadOnlyDictionary<K, V>.

// .NET: Dictionary<string, number>
// TypeScript: Record<string, number>

// Index a lookup table by role
type RolePermissions = Record<string, string[]>;
const permissions: RolePermissions = {
  admin: ['read', 'write', 'delete'],
  viewer: ['read'],
};

// K can be a union type — this constrains valid keys at compile time
type HttpStatusText = Record<200 | 201 | 400 | 401 | 404 | 500, string>;
const statusText: HttpStatusText = {
  200: 'OK',
  201: 'Created',
  400: 'Bad Request',
  401: 'Unauthorized',
  404: 'Not Found',
  500: 'Internal Server Error',
};
// TypeScript will error if you add a key not in the union, or miss one
// C# equivalent
Dictionary<int, string> statusText = new() {
    { 200, "OK" },
    { 201, "Created" },
    // No compiler error if you forget a key
};

The union key variant of Record is strictly better than C#’s Dictionary for exhaustive mappings — the compiler verifies you’ve handled every case. Dictionary<int, string> has no equivalent constraint.

readonly Arrays and Objects — Immutability Without ImmutableList<T>

TypeScript’s readonly modifier and the Readonly<T> utility type create immutable views without allocating new data structures, unlike ImmutableList<T>.

// Readonly array — prevents mutation methods (push, pop, splice)
function processItems(items: readonly string[]): string[] {
  // items.push('x'); // Error: Property 'push' does not exist on type 'readonly string[]'
  return items.map(item => item.toUpperCase()); // Fine — returns new array
}

// Readonly object — prevents property reassignment
function displayUser(user: Readonly<User>): string {
  // user.name = 'hacked'; // Error: Cannot assign to 'name' because it is a read-only property
  return `${user.name} <${user.email}>`;
}

// ReadonlyArray<T> is the explicit form
const config: ReadonlyArray<string> = ['a', 'b', 'c'];
// C# equivalents — require separate interface or allocation
IReadOnlyList<string> items = new List<string> { "a", "b", "c" };
// ImmutableList<string> alloc = ImmutableList.Create("a", "b", "c");

The TypeScript readonly modifier is zero-cost at runtime — it only exists in the type system. ImmutableList<T> creates a new data structure. For function signatures where you want to communicate “I won’t mutate this,” readonly arrays are cleaner and cheaper.
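To make the zero-cost point concrete, here is a small illustration (the variable names are ours): a readonly array type is just another compile-time view of the same underlying array, so nothing is copied and the original reference can still mutate it.

const items: string[] = ['a'];
const view: readonly string[] = items; // same array, no allocation

// view.push('b');  // Error: 'push' does not exist on type 'readonly string[]'
items.push('b');    // Fine; view now "sees" ['a', 'b'] because it is the same array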

Branded Types — Nominal Typing for Structural Types

This is the most important pattern in this article for engineers coming from C#.

TypeScript’s type system is structural: if two types have the same shape, they’re interchangeable. UserId (a string) and OrderId (a string) are the same type to TypeScript. Passing an OrderId where a UserId is expected compiles and runs silently.

// Without branding — this compiles. It should not.
type UserId = string;
type OrderId = string;

function getUser(id: UserId): Promise<User> { /* ... */ }

const orderId: OrderId = 'order-123';
getUser(orderId); // No error. You just passed an OrderId to a function expecting UserId.

Branded types add a phantom property that makes structurally identical types nominally distinct:

// Brand type utility
type Brand<T, B extends string> = T & { readonly __brand: B };

// Branded primitive types — same runtime value, distinct compile-time types
type UserId = Brand<string, 'UserId'>;
type OrderId = Brand<string, 'OrderId'>;
type ProductId = Brand<string, 'ProductId'>;

// Constructor functions that create branded values
function toUserId(id: string): UserId {
  return id as UserId;
}

function toOrderId(id: string): OrderId {
  return id as OrderId;
}

// Now the compiler distinguishes them
function getUser(id: UserId): Promise<User> { /* ... */ }

const userId = toUserId('user-456');
const orderId = toOrderId('order-123');

getUser(userId);  // Fine
getUser(orderId); // Error: Argument of type 'OrderId' is not assignable to parameter of type 'UserId'
// C# equivalent — a wrapper record or struct
public readonly record struct UserId(string Value);
public readonly record struct OrderId(string Value);

// Explicit, zero-ambiguity
public Task<User> GetUser(UserId id) { /* ... */ }

Judgment: The C# approach — a wrapper record — is cleaner and carries intent more clearly. The TypeScript branded type is a workaround for structural typing’s lack of nominal safety. Use branded types at system boundaries (API IDs, external identifiers) where mixing up types has real consequences. Don’t brand everything — the as UserId cast in the constructor is a trust boundary, and over-branding creates cast-heavy code that’s harder to read than it’s worth.

A common application: validating IDs at the Prisma or API layer and returning branded types so that downstream code can’t accidentally swap them.
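As a sketch of that pattern (the findUserIdByEmail helper is hypothetical, and we assume a Prisma client named db as in the earlier examples), the cast happens once at the trusted read, and everything downstream only ever sees UserId:

async function findUserIdByEmail(email: string): Promise<UserId | null> {
  const row = await db.user.findUnique({
    where: { email },
    select: { id: true },
  });
  // IDs coming out of our own database are trusted, so a direct cast is acceptable here.
  return row ? (row.id as UserId) : null;
}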

The Result Pattern — Explicit Error Handling

TypeScript has no checked exceptions, and neither does C#; some teams reach for OneOf or functional extensions instead. The Result<T, E> pattern makes error paths explicit in the type signature, similar to Rust’s Result or F#’s Result.

// Simple discriminated union Result type
type Result<T, E extends Error = Error> =
  | { success: true; data: T }
  | { success: false; error: E };

// Usage — the caller must handle both branches
async function createUser(dto: CreateUserDto): Promise<Result<User, ValidationError | DbError>> {
  const validation = validateUser(dto);
  if (!validation.ok) {
    return { success: false, error: new ValidationError(validation.message) };
  }

  try {
    const user = await db.user.create({ data: dto });
    return { success: true, data: user };
  } catch (err) {
    return { success: false, error: new DbError('User creation failed', { cause: err }) };
  }
}

// Caller is forced to handle the error path
const result = await createUser(dto);
if (!result.success) {
  // result.error is typed as ValidationError | DbError here
  logger.error(result.error.message);
  throw result.error;
}
// result.data is User here — TypeScript narrowed the type
const user = result.data;
// C# equivalent — throw/catch or OneOf
// OneOf pattern (library: OneOf)
OneOf<User, ValidationError, DbError> CreateUser(CreateUserDto dto) { /* ... */ }

// Or the simpler throw approach
User CreateUser(CreateUserDto dto) {
    // throws ValidationException or DbException
}

Judgment: The Result pattern is valuable at service layer boundaries where callers need to handle errors meaningfully rather than catching broadly. It’s poor for internal helper functions where you’d rather throw and let a global error handler deal with it. The NestJS convention — throwing HttpException subclasses in controllers and services, caught by exception filters — is simpler for API-facing code. Reserve Result<T, E> for business logic that must distinguish between multiple failure modes.

For teams that want a fuller implementation, the neverthrow library provides a Result type with monadic chaining (map, andThen, mapErr).
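A rough sketch of what that looks like, assuming neverthrow’s ok/err constructors and reusing the branded UserId and ValidationError from earlier (the parseUserId helper is invented for this example):

import { ok, err, Result } from 'neverthrow';

// Validate and brand an incoming ID without throwing.
function parseUserId(raw: string): Result<UserId, ValidationError> {
  return raw.startsWith('usr_')
    ? ok(raw as UserId)
    : err(new ValidationError(`Invalid UserId: ${raw}`));
}

// map transforms the success value, mapErr the error, with no try/catch in sight.
const greeting = parseUserId('usr_123')
  .map((id) => `Hello, ${id}`)
  .mapErr((e) => new Error(`Rejected: ${e.message}`));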

Discriminated Unions for State Machines

This is where TypeScript genuinely outperforms C#. Discriminated unions model state machines without class hierarchies, base classes, or pattern matching boilerplate.

// State machine for an order — each state carries its own data
type OrderState =
  | { status: 'pending'; createdAt: Date }
  | { status: 'processing'; startedAt: Date; processorId: string }
  | { status: 'shipped'; shippedAt: Date; trackingNumber: string }
  | { status: 'delivered'; deliveredAt: Date }
  | { status: 'cancelled'; cancelledAt: Date; reason: string };

type Order = {
  id: OrderId;
  customerId: UserId;
  items: OrderItem[];
  state: OrderState;
};

// TypeScript narrows the type based on the discriminant property
function describeOrder(order: Order): string {
  switch (order.state.status) {
    case 'pending':
      return `Order pending since ${order.state.createdAt.toISOString()}`;
    case 'shipped':
      // order.state.trackingNumber is available here — TypeScript knows the shape
      return `Shipped: ${order.state.trackingNumber}`;
    case 'cancelled':
      return `Cancelled: ${order.state.reason}`;
    default:
      return order.state.status;
  }
}

// Adding a new status 'refunded' will cause TypeScript to warn about
// non-exhaustive switch if you use a 'never' assertion:
function exhaustiveCheck(x: never): never {
  throw new Error(`Unhandled case: ${JSON.stringify(x)}`);
}

switch (order.state.status) {
  // ... cases ...
  default:
    return exhaustiveCheck(order.state); // Error if any status is unhandled
}
// C# equivalent — abstract base class + derived types + pattern matching
public abstract record OrderState;
public record PendingState(DateTime CreatedAt) : OrderState;
public record ProcessingState(DateTime StartedAt, string ProcessorId) : OrderState;
public record ShippedState(DateTime ShippedAt, string TrackingNumber) : OrderState;
public record DeliveredState(DateTime DeliveredAt) : OrderState;
public record CancelledState(DateTime CancelledAt, string Reason) : OrderState;

string DescribeOrder(Order order) => order.State switch {
    PendingState s => $"Order pending since {s.CreatedAt:O}",
    ShippedState s => $"Shipped: {s.TrackingNumber}",
    CancelledState s => $"Cancelled: {s.Reason}",
    _ => "Unknown state"
};

The C# version requires a base record plus five derived records, and the switch expression won’t flag the missing case; unhandled states silently fall through to the _ arm. TypeScript’s discriminated union is more compact, and the exhaustiveCheck trick gives you compile-time exhaustiveness checking without C#’s default: throw idiom.

Judgment: Discriminated unions are a clear TypeScript win for state machines, event types in event-sourced systems, and API response shapes with multiple variants. The C# class hierarchy approach carries more ceremony for the same expressiveness.

Builder Pattern in TypeScript

In C#, builders create fluent APIs for constructing complex objects. TypeScript achieves this with method chaining, but you have two additional tools: the satisfies operator and type narrowing through state.

// Simple builder with type safety
class QueryBuilder<T extends Record<string, unknown>> {
  private filters: Partial<T> = {};
  private sortField?: keyof T;
  private limitCount?: number;

  where(field: keyof T, value: T[typeof field]): this {
    this.filters[field] = value;
    return this;
  }

  orderBy(field: keyof T): this {
    this.sortField = field;
    return this;
  }

  limit(n: number): this {
    this.limitCount = n;
    return this;
  }

  build(): { filters: Partial<T>; sort?: keyof T; limit?: number } {
    return {
      filters: this.filters,
      sort: this.sortField,
      limit: this.limitCount,
    };
  }
}

// Usage — type-safe against User's fields
const query = new QueryBuilder<User>()
  .where('email', 'alice@example.com')
  .orderBy('createdAt')
  .limit(10)
  .build();

// .where('nonexistent', 'value') would error — 'nonexistent' is not keyof User

The satisfies operator (TypeScript 4.9+) is useful for configuration and builder objects that need to conform to a type without being widened to it: the object is checked against the target type but keeps its own, more specific inferred type:

// Hypothetical config shape, defined here so the example is self-contained
type DbConfig = { host: string; port: number; database: string };

const config = {
  host: 'localhost',
  port: 5432,
  database: 'mydb',
} satisfies DbConfig;
// TypeScript verifies the object matches DbConfig (a typo like 'databse' would error),
// but config keeps its own inferred type rather than being widened to DbConfig.
// Combine with as const ({ ... } as const satisfies DbConfig) if you also want
// literal property types such as 'localhost'.
// C# builder — essentially the same pattern
var query = new QueryBuilder<User>()
    .Where(u => u.Email == "alice@example.com")
    .OrderBy(u => u.CreatedAt)
    .Take(10)
    .Build();

The TypeScript version is comparable to C# for simple builders. The difference is that TypeScript builders must return this to enable chaining, and the lack of extension methods means fluent APIs must be built into the class rather than added externally.


Key Differences

| Pattern | C# Approach | TypeScript Approach | TS Better When | TS Worse When |
|---|---|---|---|---|
| Partial DTO | Separate UpdateDto class | Partial<T> | Many view shapes of one entity | Need Swagger decorators on each field |
| View model | Separate SummaryDto class | Pick<T, K> / Omit<T, K> | Auto-sync with source type | Need validation attributes per field |
| Dictionary | Dictionary<K, V> | Record<K, V> | Exhaustive union keys | Runtime performance (same either way) |
| Immutability | IReadOnlyList<T>, ImmutableList<T> | readonly T[], Readonly<T> | Zero-cost (type-only) | Runtime enforcement — TS gives compile-time only |
| Nominal typing | Wrapper records / value objects | Branded types | No new class needed | C# wrappers carry serialization and EF mapping naturally |
| Error handling | Exceptions, OneOf, or FluentResults | Result<T, E> discriminated union | Explicit multi-error-type paths | Simple CRUD — exceptions are simpler |
| State machine | Abstract class hierarchy | Discriminated union | Compact, exhaustiveness check | External serialization needs class names |
| Builder | Fluent builder classes | Method-chaining class + satisfies | Same as C# for most cases | Extension method-style builders (no equivalent) |

Gotchas for .NET Engineers

1. Utility Types Are Compile-Time Only — No Runtime Validation

Partial<User>, Readonly<User>, Pick<User, 'id' | 'name'> — none of these do anything at runtime. They are erased by the TypeScript compiler. If you receive JSON from an API and assign it to a Partial<User>, TypeScript believes you but the actual runtime object could be anything.

// This compiles. At runtime, apiResponse could have any shape.
const dto = apiResponse as Partial<User>;
dto.name; // Could be undefined, could be a number, could not exist at all

This is different from C#, where a Partial<User> equivalent (an UpdateDto class) is a real class: the JSON deserializer enforces the property types it knows about, missing properties stay null or default, and extra properties are ignored.

The fix is Zod for runtime validation. Every external boundary (API request bodies, external API responses) needs a Zod schema that validates at runtime. Don’t confuse TypeScript’s utility types with runtime safety — they’re orthogonal.

// Correct pattern
import { z } from 'zod';

const updateUserSchema = z.object({
  name: z.string().optional(),
  email: z.string().email().optional(),
});

type UpdateUserDto = z.infer<typeof updateUserSchema>; // Derives the type from Zod
// Now updateUserSchema.parse(body) validates at runtime AND gives you the type

2. Readonly<T> Does Not Prevent Deep Mutation

Readonly<T> makes the direct properties of T read-only, but it is not deep. Nested objects within a Readonly<T> are still mutable.

type ReadonlyUser = Readonly<User>;

const user: ReadonlyUser = { id: 1, name: 'Alice', address: { city: 'Boston' } };

// This errors — direct property
// user.name = 'Bob'; // Error: Cannot assign to 'name' because it is a read-only property

// This does NOT error — nested property
user.address.city = 'NYC'; // Compiles fine. address itself is readonly, but city is not.

For deep immutability, use as const for static data, or a library like immer for mutable-style updates on truly immutable data. Do not rely on Readonly<T> for deep immutability the way you might rely on ImmutableList<T> in C#.
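Two quick illustrations of those options; the immer call is a sketch based on its produce API and reuses the user object from above:

import { produce } from 'immer';

// as const: deep, compile-time readonly for static data
const defaults = { retry: { attempts: 3, backoffMs: 200 } } as const;
// defaults.retry.attempts = 5; // Error: every level is readonly

// immer: write "mutations" against a draft and get a new object back;
// the original user is left untouched
const moved = produce(user, (draft) => {
  draft.address.city = 'NYC';
});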

3. Discriminated Union Exhaustiveness Requires a Workaround

In C# 9+ switch expressions, the compiler tells you when a pattern match is non-exhaustive. TypeScript does not do this natively — a switch on a discriminated union’s status field with a missing case compiles without error and silently falls through.

The exhaustiveCheck(x: never) trick forces exhaustiveness:

function handleState(state: OrderState): string {
  switch (state.status) {
    case 'pending': return 'Pending';
    case 'shipped': return 'Shipped';
    // Missing 'processing', 'delivered', 'cancelled'
    default:
      // Without this, the missing cases silently return undefined
      throw new Error(`Unhandled state: ${(state as any).status}`);
  }
}

// With exhaustiveness check:
function handleStateExhaustive(state: OrderState): string {
  switch (state.status) {
    case 'pending': return 'Pending';
    case 'shipped': return 'Shipped';
    default:
      // TypeScript errors here because state can still be 'processing' | 'delivered' | 'cancelled'
      // — which are not assignable to 'never'
      return exhaustiveCheck(state); // compile error until all cases are handled
  }
}

function exhaustiveCheck(x: never): never {
  throw new Error(`Exhaustive check failed: ${JSON.stringify(x)}`);
}

This is opt-in and requires discipline. Enable the @typescript-eslint/switch-exhaustiveness-check ESLint rule (see Article 2.7) to catch this automatically.

4. Complex Composed Types Hurt Readability

The ability to compose Partial, Pick, Omit, Required, and Readonly is powerful, but complex compositions become unreadable quickly:

// This is a type expression, not documentation
type T = Required<Pick<Omit<Partial<User>, 'id'>, 'name' | 'email'>> & Readonly<Pick<User, 'id'>>;

When a type expression becomes complex enough that you have to pause to parse it, extract named intermediate types. TypeScript’s type inference propagates through names — you lose nothing by naming intermediate types:

type UserUpdateFields = Omit<User, 'id' | 'createdAt'>;
type PartialUserUpdate = Partial<UserUpdateFields>;
type RequiredName = Required<Pick<PartialUserUpdate, 'name'>>;
type PatchUserDto = RequiredName & Omit<PartialUserUpdate, 'name'>;

In C# you’d just write the class. In TypeScript, composition gives you derived types — but name the intermediate steps.

5. Branded Types Require Cast at Construction — Don’t Skip Validation

The as UserId cast in a branded type constructor is a promise that the string is a valid UserId. If you cast without validating, you’ve defeated the purpose:

// Wrong — cast without validation
function toUserId(id: string): UserId {
  return id as UserId; // You're just trusting the caller — no validation
}

// Better — validate before branding
function toUserId(raw: string): UserId {
  if (!raw.startsWith('usr_') || raw.length < 10) {
    throw new ValidationError(`Invalid UserId format: ${raw}`);
  }
  return raw as UserId;
}

// Best for API boundaries — use Zod with refinements
const UserIdSchema = z.string()
  .startsWith('usr_')
  .min(10)
  .transform(val => val as UserId); // Brand after validation

The brand is only as trustworthy as the constructor. At API and database read points, validate the raw string against your format rules before casting. For identifiers coming from Prisma queries (which you trust), direct casting is fine — you know the database gives you valid IDs.


Hands-On Exercise

This exercise builds a typed state machine for a support ticket system using discriminated unions, branded types, and Result.

Create src/tickets/ticket.types.ts:

// 1. Brand the ticket ID
type Brand<T, B extends string> = T & { readonly __brand: B };
export type TicketId = Brand<string, 'TicketId'>;
export type AgentId = Brand<string, 'AgentId'>;

export function toTicketId(id: string): TicketId {
  return id as TicketId;
}

export function toAgentId(id: string): AgentId {
  return id as AgentId;
}

// 2. Discriminated union state machine
export type TicketState =
  | { status: 'open'; createdAt: Date }
  | { status: 'assigned'; createdAt: Date; assignedTo: AgentId; assignedAt: Date }
  | { status: 'in_progress'; assignedTo: AgentId; startedAt: Date }
  | { status: 'resolved'; resolvedAt: Date; resolution: string }
  | { status: 'closed'; closedAt: Date };

export interface Ticket {
  id: TicketId;
  title: string;
  description: string;
  state: TicketState;
}

// 3. Result type for transitions — exported because assignTicket uses them in its signature
export type TransitionError = { code: 'INVALID_TRANSITION'; from: string; to: string };
export type Result<T, E> = { success: true; data: T } | { success: false; error: E };

// 4. Exhaustiveness check helper
function exhaustiveCheck(x: never): never {
  throw new Error(`Unhandled state: ${JSON.stringify(x)}`);
}

// 5. State transition function — only valid transitions allowed
export function assignTicket(
  ticket: Ticket,
  agentId: AgentId,
): Result<Ticket, TransitionError> {
  if (ticket.state.status !== 'open') {
    return {
      success: false,
      error: {
        code: 'INVALID_TRANSITION',
        from: ticket.state.status,
        to: 'assigned',
      },
    };
  }
  return {
    success: true,
    data: {
      ...ticket,
      state: {
        status: 'assigned',
        createdAt: ticket.state.createdAt,
        assignedTo: agentId,
        assignedAt: new Date(),
      },
    },
  };
}

// 6. Exhaustive display function — compiler catches missing cases
export function describeTicket(ticket: Ticket): string {
  const { state } = ticket;
  switch (state.status) {
    case 'open':
      return `Open since ${state.createdAt.toISOString()}`;
    case 'assigned':
      return `Assigned to ${state.assignedTo} at ${state.assignedAt.toISOString()}`;
    case 'in_progress':
      return `In progress by ${state.assignedTo}`;
    case 'resolved':
      return `Resolved: ${state.resolution}`;
    case 'closed':
      return `Closed at ${state.closedAt.toISOString()}`;
    default:
      return exhaustiveCheck(state); // Compile error if a status is unhandled
  }
}

// 7. Utility type derivations for API layer
export type CreateTicketDto = Pick<Ticket, 'title' | 'description'>;
export type TicketSummary = Pick<Ticket, 'id' | 'title'> & { status: TicketState['status'] };

After writing this, try:

  1. Add a new 'reopened' status to TicketState and observe where the compiler flags the missing case
  2. Pass an AgentId where a TicketId is expected and verify the error
  3. Call assignTicket on an in_progress ticket and handle the Result properly

Quick Reference

| Need | TypeScript | C# Equivalent |
|---|---|---|
| Optional all fields | Partial<T> | New DTO with nullable properties |
| Select some fields | Pick<T, 'a' \| 'b'> | New DTO with selected properties |
| Exclude some fields | Omit<T, 'id' \| 'createdAt'> | New DTO without those properties |
| Typed dictionary | Record<K, V> | Dictionary<K, V> |
| Readonly type | Readonly<T> | IReadOnlyList<T> / IReadOnly* |
| Readonly array | readonly T[] or ReadonlyArray<T> | IReadOnlyList<T> |
| Make all required | Required<T> | No direct equivalent |
| Nominal ID typing | Branded type: T & { __brand: B } | Wrapper record / value object |
| Exhaustive state | Discriminated union + exhaustiveCheck | Abstract class + switch expression |
| Explicit errors | Result<T, E> discriminated union | OneOf / FluentResults / exceptions |
| Builder pattern | Class with this-returning methods | Fluent builder class |
| Validated type | Zod schema + z.infer<typeof schema> | DTO class + Data Annotations |
| Runtime type check | instanceof (classes) / in operator / Zod | is T pattern matching |

Common Compositions

// Partial update, excluding system fields
type PatchDto<T> = Partial<Omit<T, 'id' | 'createdAt' | 'updatedAt'>>;

// Required subset
type RequiredFields<T, K extends keyof T> = Required<Pick<T, K>> & Partial<Omit<T, K>>;

// Deep readonly (naive — does not handle arrays inside objects)
type DeepReadonly<T> = { readonly [K in keyof T]: DeepReadonly<T[K]> };

// Extract discriminant values from a union
type OrderStatus = OrderState['status']; // 'pending' | 'processing' | 'shipped' | 'delivered' | 'cancelled'

Further Reading

Linting & Formatting: StyleCop/EditorConfig vs. ESLint/Prettier

For .NET engineers who know: .editorconfig, Roslyn analyzers, StyleCop.Analyzers, and the code quality rules configured in .csproj and global.json
You’ll learn: How ESLint and Prettier divide the responsibilities of linting and formatting in the TypeScript world, how to configure them, and how to enforce them in CI and pre-commit hooks
Time: 10-15 min read

In the .NET ecosystem, code style enforcement comes from several cooperating systems: .editorconfig for indentation and whitespace, Roslyn analyzers for code quality rules, and optionally StyleCop for style conventions like file headers and member ordering. They are all enforced by the same compiler infrastructure — MSBuild picks them up automatically.

In the TypeScript world, the responsibility is split between two separate tools that do not overlap: ESLint catches logic errors, suspicious patterns, and code quality issues. Prettier handles all formatting — indentation, quotes, semicolons, line length. You configure them separately, run them separately, and they have no awareness of each other except through a bridge package that prevents conflicts. Understanding this split is the first thing to internalize.


The .NET Way (What You Already Know)

Roslyn analyzers run as part of compilation. The analyzer sees the full semantic model of your code — not just text, but the actual meaning of each construct. StyleCop.Analyzers adds rules like “public members must have XML documentation” and “using directives must appear within a namespace.” The .editorconfig file controls whitespace, indent size, and newline conventions. Warnings can be upgraded to errors via .csproj:

<!-- .csproj — treat analyzer warnings as errors -->
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <AnalysisLevel>latest</AnalysisLevel>
  <EnableNETAnalyzers>true</EnableNETAnalyzers>
</PropertyGroup>
# .editorconfig — cross-language formatting rules
[*.cs]
indent_style = space
indent_size = 4
end_of_line = crlf
charset = utf-8-bom
dotnet_sort_system_directives_first = true
csharp_new_line_before_open_brace = all

The experience in Visual Studio is seamless: save a file, it auto-formats. Break a rule, you get a red squiggle. Push to CI, the build fails. You rarely think about the tooling itself.

TypeScript tooling requires more explicit configuration but gives you finer control over the rules.


The TypeScript Way

ESLint — The Linter

ESLint is the standard linter for JavaScript and TypeScript. It analyzes your code for patterns that are likely bugs, code quality issues, or violations of team conventions. ESLint does not care about whitespace or formatting — that’s Prettier’s domain.

ESLint uses a plugin system. The base rules cover JavaScript patterns. TypeScript-specific rules come from @typescript-eslint/eslint-plugin, which has access to TypeScript’s type information and can catch errors that the base rules cannot.

Flat Config Format (the Current Standard)

ESLint introduced the flat config format (eslint.config.js or eslint.config.mjs) in v8 and made it the default in v9, replacing the legacy .eslintrc.json format. All new projects should use flat config. If you encounter an .eslintrc.json in an existing codebase, it is using the legacy format — the syntax differs but the concepts are the same.

// eslint.config.mjs — flat config format (ESLint v9+)
import js from '@eslint/js';
import tseslint from 'typescript-eslint';
import prettier from 'eslint-config-prettier';

export default tseslint.config(
  // Base JavaScript recommended rules
  js.configs.recommended,

  // TypeScript-aware rules
  ...tseslint.configs.recommended,

  // Disable ESLint formatting rules that conflict with Prettier
  // This must be last so it overrides everything above
  prettier,

  {
    // Project-specific parser options and rule overrides.
    // The type-aware rules below (no-floating-promises, switch-exhaustiveness-check)
    // need access to the TypeScript project:
    languageOptions: {
      parserOptions: {
        project: true,
        tsconfigRootDir: import.meta.dirname,
      },
    },
    rules: {
      // Prevent floating Promises — the most important async rule for .NET engineers
      '@typescript-eslint/no-floating-promises': 'error',

      // Enforce consistent use of 'type' imports (imported solely for types)
      '@typescript-eslint/consistent-type-imports': ['error', { prefer: 'type-imports' }],

      // Disallow 'any' — use 'unknown' instead and narrow
      '@typescript-eslint/no-explicit-any': 'warn',

      // Require explicit return types on public functions
      // (comment this out if it's too verbose for your taste)
      '@typescript-eslint/explicit-function-return-type': ['warn', {
        allowExpressions: true,
        allowTypedFunctionExpressions: true,
      }],

      // Prevent unused variables from silently accumulating
      '@typescript-eslint/no-unused-vars': ['error', {
        argsIgnorePattern: '^_',
        varsIgnorePattern: '^_',
      }],

      // Enforce exhaustive switch on discriminated unions
      '@typescript-eslint/switch-exhaustiveness-check': 'error',

      // No console.log in production code — use a logger
      'no-console': ['warn', { allow: ['warn', 'error'] }],
    },
  },

  {
    // Files to ignore — equivalent to .eslintignore
    ignores: ['dist/', 'node_modules/', '.next/', 'coverage/'],
  },
);

Install the required packages:

pnpm add -D eslint @eslint/js typescript-eslint eslint-config-prettier

Why @typescript-eslint/no-floating-promises Is Non-Negotiable

This rule deserves special mention because it catches the most dangerous class of bug for engineers moving from C# to TypeScript. In C#, if you call an async method without await, the compiler warns you. In TypeScript, without this rule, forgetting await is silent:

// Without the lint rule, this compiles and runs — and does nothing observable
async function deleteUser(id: UserId): Promise<void> {
  // Missing await — the delete runs eventually, but this function returns immediately
  userRepository.delete(id);  // Returns a Promise that nobody awaits
}

// With @typescript-eslint/no-floating-promises: 'error'
// ESLint reports: "Promises must be awaited, end with a call to .catch,
// end with a call to .then with a rejection handler, or be explicitly marked
// as ignored with the `void` operator."

// Fix:
await userRepository.delete(id);

// Or explicitly acknowledged (for fire-and-forget — use sparingly):
void userRepository.sendWelcomeEmail(id);

This is the TypeScript equivalent of enabling <Nullable>enable</Nullable> in C# — it prevents an entire class of runtime errors at compile time.

Prettier — The Formatter

Prettier is an opinionated code formatter. “Opinionated” means it has almost no configuration options by design. You give it code, it gives you consistently formatted code. There is no “use 3-space indentation with my specific brace style” — Prettier makes those decisions and they are not negotiable per-project.

This is intentional. The value of Prettier is that formatting discussions end. There are no code review comments about trailing commas or quote style. Everyone’s editor formats on save to the same output.

// .prettierrc — the full set of meaningful options
{
  "semi": true,
  "singleQuote": true,
  "trailingComma": "all",
  "printWidth": 100,
  "tabWidth": 2,
  "useTabs": false,
  "arrowParens": "always"
}

These are our defaults. The options above are the ones teams actually debate — everything else Prettier decides without asking you.

| Option | Our Setting | What It Does |
|---|---|---|
| semi | true | Semicolons at end of statements (matches C# convention) |
| singleQuote | true | Single quotes for strings (JS/TS convention) |
| trailingComma | "all" | Trailing commas everywhere (cleaner git diffs) |
| printWidth | 100 | Wrap lines at 100 characters |
| tabWidth | 2 | 2 spaces per indent level (JS/TS ecosystem standard) |

Add a .prettierignore for files Prettier should not touch:

# .prettierignore
dist/
node_modules/
.next/
coverage/
*.md

Install Prettier:

pnpm add -D prettier eslint-config-prettier

The eslint-config-prettier package disables ESLint formatting rules that would conflict with Prettier. It must come after the other shared configs in your flat config array (or last in extends if you are on the legacy format). Without it, ESLint and Prettier fight over indentation and quotes, and every save produces a cycle of reformatting.

VS Code Integration

Install two extensions:

  • ESLint (dbaeumer.vscode-eslint) — surfaces ESLint errors inline
  • Prettier - Code Formatter (esbenp.prettier-vscode) — formats on save

Configure your workspace settings to format on save and use Prettier as the default formatter:

// .vscode/settings.json — commit this to share settings with the team
{
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": "explicit"
  },
  "[typescript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[typescriptreact]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "typescript.preferences.importModuleSpecifier": "relative",
  "eslint.validate": ["javascript", "javascriptreact", "typescript", "typescriptreact"]
}

Also commit a .vscode/extensions.json to recommend extensions to anyone who opens the repo:

// .vscode/extensions.json
{
  "recommendations": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "ms-vscode.vscode-typescript-next"
  ]
}

CLI Commands

Add these to package.json scripts:

{
  "scripts": {
    "lint": "eslint src --ext .ts,.tsx",
    "lint:fix": "eslint src --ext .ts,.tsx --fix",
    "format": "prettier --write src",
    "format:check": "prettier --check src",
    "typecheck": "tsc --noEmit"
  }
}

The difference between --write and --check:

  • prettier --write modifies files in place — use during development
  • prettier --check exits with a non-zero code if any file would be reformatted — use in CI

ESLint --fix auto-corrects issues that have fixers (missing semicolons, import ordering, trailing commas). Not every rule has a fixer — logic issues like no-floating-promises must be fixed manually.

Pre-Commit Hooks with Husky and lint-staged

Pre-commit hooks run before each commit and reject the commit if they fail. This prevents broken code from entering the repository, the same way a failing CI build prevents broken code from merging. The combination of Husky (Git hooks management) and lint-staged (run linters on only the staged files, not the entire codebase) is the standard.

# Install
pnpm add -D husky lint-staged

# Initialize Husky (creates .husky/ directory)
pnpm exec husky init

This creates .husky/pre-commit. With Husky v9 the hook file is a plain shell script with no required boilerplate, so replace its contents with a single line:

pnpm exec lint-staged

Configure lint-staged in package.json:

{
  "lint-staged": {
    "*.{ts,tsx}": [
      "prettier --write",
      "eslint --fix"
    ],
    "*.{json,css,md}": [
      "prettier --write"
    ]
  }
}

What this does: when you run git commit, Husky triggers the pre-commit hook, which runs lint-staged. lint-staged runs Prettier and ESLint on only the files you’ve staged — not the entire repository. If ESLint finds unfixable errors, the commit is rejected and the error output tells you exactly what to fix.

This is the equivalent of having StyleCop run as a commit gate. The key difference: pre-commit hooks in Git are client-side only and can be bypassed with git commit --no-verify. CI enforcement (the next step) is the authoritative gate.

CI Integration

In your GitHub Actions CI workflow, run the checks without auto-fixing:

# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 9

      - uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'pnpm'

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Type check
        run: pnpm typecheck

      - name: Lint
        run: pnpm lint

      - name: Format check
        run: pnpm format:check

      - name: Test
        run: pnpm test

Run pnpm lint (not pnpm lint:fix) in CI — you want the build to fail on lint errors, not silently fix and continue. The same applies to pnpm format:check vs. pnpm format.

SonarCloud Integration

SonarCloud (covered in depth in Article 7.2) analyzes TypeScript code for code smells, duplication, security hotspots, and coverage. It works alongside ESLint rather than replacing it — ESLint handles TypeScript-specific rules that SonarCloud’s TypeScript analyzer doesn’t cover as well, and SonarCloud provides the project-level dashboard, trend tracking, and quality gates that ESLint does not.

The integration is straightforward: SonarCloud reads your ESLint configuration and coverage reports from CI. In sonar-project.properties:

sonar.projectKey=your-org_your-repo
sonar.organization=your-org
sonar.sources=src
sonar.tests=src
sonar.test.inclusions=**/*.test.ts,**/*.spec.ts
sonar.coverage.exclusions=**/*.test.ts,**/*.spec.ts
sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.eslint.reportPaths=eslint-report.json

Generate the ESLint report in a format SonarCloud understands:

{
  "scripts": {
    "lint:report": "eslint src --ext .ts,.tsx -f json -o eslint-report.json || true"
  }
}

The || true prevents the CI step from failing before SonarCloud can read the report — SonarCloud’s quality gate handles the failure decision.


Key Differences

| Concern | .NET | TypeScript |
|---|---|---|
| Linting | Roslyn analyzers + StyleCop | ESLint + @typescript-eslint |
| Formatting | .editorconfig + Roslyn formatting | Prettier |
| Rule config location | .csproj, global.json, .editorconfig | eslint.config.mjs, .prettierrc |
| Type-aware rules | Built into Roslyn (semantic model) | @typescript-eslint (needs parserOptions.project) |
| Format on save | Visual Studio built-in | VS Code Prettier extension |
| Pre-commit enforcement | Not built-in (CI only) | Husky + lint-staged |
| CI enforcement | dotnet build fails on analyzer errors | eslint + prettier --check exit codes |
| Auto-fix | Quick Fix in VS (not CLI) | eslint --fix, prettier --write |
| Severity levels | Error, Warning, Info, Hidden | error, warn, off |
| Suppress a warning | // ReSharper disable ... / #pragma warning disable | // eslint-disable-next-line rule-name |

Gotchas for .NET Engineers

1. ESLint and Prettier Must Not Configure the Same Rules

The most common misconfiguration when setting up a new project: ESLint’s formatting rules (indent, quotes, semicolons) and Prettier’s formatting rules collide. ESLint fixes indentation to 4 spaces; Prettier reformats to 2 spaces; ESLint re-flags it. The result is a loop where every save generates changes.

The fix is eslint-config-prettier — a config that disables all ESLint rules that handle formatting. It must be the last item in your ESLint config so it overrides everything else:

// eslint.config.mjs — prettier config MUST be last
export default tseslint.config(
  js.configs.recommended,
  ...tseslint.configs.recommended,
  prettier, // Last — disables ESLint formatting rules
  {
    rules: { /* your custom rules */ },
  },
);

If you see lint errors about indentation or quote style that Prettier is already handling, eslint-config-prettier is either missing or not last in the config.

2. TypeScript-Aware Rules Require parserOptions.project

Some @typescript-eslint rules — including no-floating-promises, no-unsafe-assignment, and switch-exhaustiveness-check — require type information to work. They need to run with access to the TypeScript compiler’s type model. Without it, they either silently skip or produce incorrect results.

Configure this in your flat config:

// eslint.config.mjs
import tseslint from 'typescript-eslint';

export default tseslint.config(
  ...tseslint.configs.recommendedTypeChecked, // Type-checked rules
  {
    languageOptions: {
      parserOptions: {
        project: true,        // Use the tsconfig.json in the same directory
        tsconfigRootDir: import.meta.dirname,
      },
    },
  },
);

The trade-off: type-checked rules are slower because they invoke the TypeScript compiler. On a large codebase, eslint with type-checking can take 30-60 seconds vs. 5-10 seconds without it. In CI this is acceptable. For editor integration, VS Code’s ESLint extension handles incremental analysis — it does not re-check the entire project on every keystroke.

If lint performance becomes a problem, split your config: type-checked rules for CI, non-type-checked for pre-commit hooks (where speed matters more).
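One way to implement that split is sketched below; the LINT_TYPE_CHECKED environment variable is our own convention, not an ESLint feature, and CI would set it while lint-staged would not:

// eslint.config.mjs: opt into the slower type-checked rules only when asked
import js from '@eslint/js';
import tseslint from 'typescript-eslint';
import prettier from 'eslint-config-prettier';

const typeChecked = process.env.LINT_TYPE_CHECKED === 'true';

export default tseslint.config(
  js.configs.recommended,
  ...(typeChecked
    ? tseslint.configs.recommendedTypeChecked
    : tseslint.configs.recommended),
  prettier,
  {
    languageOptions: typeChecked
      ? { parserOptions: { project: true, tsconfigRootDir: import.meta.dirname } }
      : {},
  },
  { ignores: ['dist/', 'node_modules/'] },
);

// CI:          LINT_TYPE_CHECKED=true pnpm lint
// lint-staged: pnpm lint (fast, no type information)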

3. Pre-Commit Hooks Are Client-Side and Can Be Bypassed

git commit --no-verify skips all pre-commit hooks. Engineers in a hurry will do this — especially when they’re told “just commit this quick fix.” Pre-commit hooks are a developer experience feature, not a security control. CI is the enforcing gate.

If your CI workflow does not run pnpm lint and pnpm format:check, then an engineer who bypasses the hook can merge code that violates your standards. The pre-commit hook catches issues early (faster feedback loop). CI enforces them (cannot be bypassed). Both are necessary.

Additionally, Husky only works after pnpm install runs. On a fresh clone, a developer who runs git commit before pnpm install will have no hooks. Document this in your project README and consider adding a script to .github/CONTRIBUTING.md.
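For reference, husky init also adds a prepare script to package.json, which is what installs the hooks whenever someone runs pnpm install on a fresh clone (on Husky v8 and earlier the script is "husky install" instead):

// package.json (added automatically by `husky init`)
{
  "scripts": {
    "prepare": "husky"
  }
}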

4. Prettier’s Defaults May Conflict With Your Existing Conventions

If your team has been using 4-space indentation (C# convention), Prettier’s 2-space default will reformat your entire codebase on first run. This is a large, noisy commit that makes git blame less useful.

Options:

  1. Accept it: do a single “format entire codebase” commit with a clear commit message, then move on. This is the right call for new projects or projects with few existing files.
  2. Configure Prettier’s tabWidth: 4 to match your existing convention. The 2-space default is the strong JS/TS community norm, though — teams that deviate from it encounter friction with copy-pasted examples and library source code.

For new projects, use the 2-space default. For migrating existing TS codebases, do the format commit intentionally and communicate it to the team.
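If you take option 1, you can keep git blame useful by listing the reformat commit in a .git-blame-ignore-revs file; local git needs the config line below, and GitHub's blame view picks the file up automatically (the hash is a placeholder):

# Record the reformat commit so blame skips it
echo "<full-hash-of-the-format-commit>" >> .git-blame-ignore-revs
git config blame.ignoreRevsFile .git-blame-ignore-revs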


Hands-On Exercise

Set up ESLint, Prettier, and Husky on a fresh TypeScript project.

Step 1: Create the project and install dependencies

mkdir lint-exercise && cd lint-exercise
pnpm init
pnpm add -D typescript @types/node
pnpm add -D eslint @eslint/js typescript-eslint eslint-config-prettier
pnpm add -D prettier
pnpm add -D husky lint-staged

Step 2: Initialize TypeScript

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src"]
}

Step 3: Create the ESLint config

// eslint.config.mjs
import js from '@eslint/js';
import tseslint from 'typescript-eslint';
import prettier from 'eslint-config-prettier';

export default tseslint.config(
  js.configs.recommended,
  ...tseslint.configs.recommendedTypeChecked,
  prettier,
  {
    languageOptions: {
      parserOptions: {
        project: true,
        tsconfigRootDir: import.meta.dirname,
      },
    },
    rules: {
      '@typescript-eslint/no-floating-promises': 'error',
      '@typescript-eslint/no-explicit-any': 'warn',
      'no-console': ['warn', { allow: ['warn', 'error'] }],
    },
  },
  { ignores: ['dist/', 'node_modules/'] },
);

Step 4: Create the Prettier config

// .prettierrc
{
  "semi": true,
  "singleQuote": true,
  "trailingComma": "all",
  "printWidth": 100,
  "tabWidth": 2
}

Step 5: Add scripts and lint-staged config to package.json

{
  "scripts": {
    "lint": "eslint src",
    "lint:fix": "eslint src --fix",
    "format": "prettier --write src",
    "format:check": "prettier --check src",
    "typecheck": "tsc --noEmit"
  },
  "lint-staged": {
    "*.{ts,tsx}": ["prettier --write", "eslint --fix"],
    "*.json": ["prettier --write"]
  }
}

Step 6: Initialize Husky

pnpm exec husky init
echo "pnpm exec lint-staged" > .husky/pre-commit

Step 7: Write intentionally bad code and verify the tools catch it

// src/index.ts
async function fetchData(): Promise<string> {
  return Promise.resolve('data')
}

// Floating promise — will be caught by ESLint
function callWithoutAwait() {
  fetchData()  // Missing await
}

// Inconsistent formatting — will be caught by Prettier
const x={a:1,b:2,c:3}

Run:

pnpm typecheck    # Should pass — no type errors
pnpm lint         # Should error: no-floating-promises (formatting issues are Prettier's job)
pnpm format:check # Should error: formatting issues
pnpm lint:fix     # Fix what can be auto-fixed
pnpm format       # Fix formatting
pnpm lint         # Should still error on no-floating-promises (must be fixed manually)

Fix the floating promise manually, then verify pnpm lint passes. Now stage the file and make a commit — Husky should run lint-staged automatically.


Quick Reference

Tool Responsibilities

| Tool | Handles | Does Not Handle |
|---|---|---|
| ESLint | Logic errors, async patterns, unused vars, type safety rules | Whitespace, indentation, quotes |
| Prettier | Indentation, line length, quotes, semicolons, trailing commas | Logic, correctness, types |
| eslint-config-prettier | Disabling ESLint formatting rules that conflict with Prettier | Nothing by itself |
| Husky | Running hooks at Git events | Enforcement (client-side only) |
| lint-staged | Running tools on staged files only (fast) | Running on full codebase |
| TypeScript compiler (tsc) | Type checking | Linting, formatting |

Command Reference

| Command | Purpose | When to Use |
|---|---|---|
| pnpm lint | Check for ESLint issues | CI, pre-push |
| pnpm lint:fix | Fix auto-fixable ESLint issues | Development |
| pnpm format | Format all files with Prettier | Before commit |
| pnpm format:check | Check formatting without modifying | CI |
| pnpm typecheck | Type-check without emitting | CI, pre-push |

ESLint Rule Reference

| Rule | Why It Matters for .NET Engineers |
|---|---|
| @typescript-eslint/no-floating-promises | Catches missing await — the #1 async gotcha |
| @typescript-eslint/no-explicit-any | Enforces the unknown discipline |
| @typescript-eslint/consistent-type-imports | Reduces bundle size by marking type-only imports |
| @typescript-eslint/switch-exhaustiveness-check | Compiler-enforced exhaustive discriminated unions |
| @typescript-eslint/no-unused-vars | Same as C#’s unused variable warnings |
| no-console | Keeps log statements out of production code |

Disabling Rules

// Disable for one line
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const data: any = JSON.parse(raw);

// Disable for a block
/* eslint-disable @typescript-eslint/no-explicit-any */
function parseUntypedResponse(raw: string): any {
  return JSON.parse(raw);
}
/* eslint-enable @typescript-eslint/no-explicit-any */

// Disable for an entire file (rarely correct)
/* eslint-disable */

Further Reading

Debugging TypeScript: Visual Studio vs. VS Code & Chrome DevTools

For .NET engineers who know: Visual Studio’s integrated debugger — breakpoints, watch windows, call stacks, exception settings, Immediate Window, conditional breakpoints, and attaching to processes
You’ll learn: How to set up equivalent debugging for TypeScript in VS Code and Chrome DevTools, what source maps are and why they matter, and the practical debugging workflows for Node.js/NestJS, Next.js, and Vitest
Time: 15-20 min read

Visual Studio’s debugger is one of .NET’s genuine advantages. You attach a breakpoint, press F5, and within seconds you’re inspecting live variable values, stepping through code, and evaluating expressions. The integrated experience — breakpoints in the same editor you write code, watch windows, exception settings, Immediate Window — is mature and reliable.

Debugging TypeScript requires more setup. The reason is architectural: TypeScript is compiled to JavaScript before running. What executes at runtime is JavaScript, not TypeScript. A stack trace points to a JavaScript file and line number that you never wrote. Source maps bridge this gap, but you need to understand them to configure debugging correctly. Once configured, VS Code’s debugger is fully capable — breakpoints, watches, call stacks, conditional expressions. The path there just requires deliberate setup rather than the F5-and-go experience you’re used to.


The .NET Way (What You Already Know)

In .NET, the CLR executes the IL compiled from your C#, JIT-compiling it to native code at runtime. The PDB (Program Database) file maps IL instructions back to source code lines. Visual Studio reads PDB files automatically — you never configure this. Press F5, and the debugger attaches to the process. Breakpoints you set in .cs files work because Visual Studio knows the mapping between source lines and executable addresses.

// Visual Studio debugger experience:
// 1. Set breakpoint on this line
// 2. Press F5
// 3. Hover over 'user' to inspect its properties
// 4. Add 'user.Email' to the Watch window
// 5. Step into GetUser() with F11

[HttpGet("{id}")]
public async Task<ActionResult<UserDto>> GetUser(int id)
{
    var user = await _userService.GetByIdAsync(id);  // Breakpoint here
    if (user is null) return NotFound();
    return _mapper.Map<UserDto>(user);
}

The Immediate Window lets you execute arbitrary C# expressions in the context of the current stack frame. Exception Settings let you break on first-chance exceptions. Attach to Process lets you debug a running production-like environment. All of this is configured through GUI dialogs and Just Works.

None of that transfers automatically to TypeScript. But all of it is achievable.


The TypeScript Way

Source Maps — The PDB Equivalent

A source map is a JSON file that maps positions in the generated JavaScript back to positions in the original TypeScript source. It is TypeScript’s equivalent of a PDB file.

Without source maps, a stack trace looks like this:

TypeError: Cannot read properties of undefined (reading 'email')
    at Object.<anonymous> (/app/dist/users/users.service.js:47:23)
    at step (/app/dist/users/users.service.js:33:23)
    at Object.next (/app/dist/users/users.service.js:14:53)

With source maps enabled and a debugger that understands them, the same error points to:

TypeError: Cannot read properties of undefined (reading 'email')
    at UsersService.getById (/app/src/users/users.service.ts:31:15)

Enable source maps in tsconfig.json:

{
  "compilerOptions": {
    "sourceMap": true,        // Generate .js.map files alongside .js files
    "inlineSources": true,    // Embed the TS source in the map (optional — easier for Sentry)
    "outDir": "dist"
  }
}

When TypeScript compiles users.service.ts to dist/users/users.service.js, it also generates dist/users/users.service.js.map. The .map file is a JSON document containing the mapping from each character position in the .js file to the corresponding position in the .ts file. The debugger reads this mapping to show you TypeScript source while executing JavaScript.
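To make the format concrete, a .js.map file is plain JSON along the lines of the illustrative excerpt below; the mappings string is a Base64-VLQ encoding of the position pairs and is heavily trimmed here:

// dist/users/users.service.js.map (illustrative excerpt)
{
  "version": 3,
  "file": "users.service.js",
  "sources": ["../../src/users/users.service.ts"],
  "sourcesContent": ["/* embedded when inlineSources is true */"],
  "names": [],
  "mappings": ";;AAAA,..."
}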

VS Code launch.json — The Debug Configuration

The launch.json file in .vscode/ tells VS Code how to start and attach to processes for debugging. This is equivalent to configuring Visual Studio’s debug targets in project properties.

NestJS / Node.js — Debug the API Server

// .vscode/launch.json — NestJS/Node.js configurations
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "NestJS: Debug (ts-node)",
      "type": "node",
      "request": "launch",
      "runtimeArgs": ["-r", "ts-node/register", "-r", "tsconfig-paths/register"],
      "args": ["src/main.ts"],
      "cwd": "${workspaceFolder}",
      "env": {
        "NODE_ENV": "development",
        "TS_NODE_PROJECT": "tsconfig.json"
      },
      "sourceMaps": true,
      "outFiles": ["${workspaceFolder}/dist/**/*.js"],
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen"
    },
    {
      "name": "NestJS: Attach to Running Process",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "sourceMaps": true,
      "outFiles": ["${workspaceFolder}/dist/**/*.js"],
      "restart": true
    },
    {
      "name": "NestJS: Debug Compiled (dist/)",
      "type": "node",
      "request": "launch",
      "program": "${workspaceFolder}/dist/main.js",
      "sourceMaps": true,
      "outFiles": ["${workspaceFolder}/dist/**/*.js"],
      "preLaunchTask": "npm: build"
    }
  ]
}

The "NestJS: Attach to Running Process" configuration is the most useful for daily development. Start your NestJS server with the --inspect flag and then attach:

# Start NestJS with the Node.js inspector enabled
node --inspect -r ts-node/register -r tsconfig-paths/register src/main.ts

# Or add to package.json scripts:
# "debug": "node --inspect -r ts-node/register -r tsconfig-paths/register src/main.ts"

With --inspect, Node.js listens on ws://127.0.0.1:9229 for a debugger connection. Press F5 in VS Code with the “Attach” configuration selected, and VS Code connects to the running process. Set a breakpoint in your TypeScript source — it will be hit when that code executes.

Next.js — Debug the Full-Stack App

Next.js needs two debugger sessions: one for the Node.js server process (Server Components, API routes), and the browser DevTools for client components. VS Code can run both:

// .vscode/launch.json — Next.js configurations
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Next.js: Server-Side Debug",
      "type": "node",
      "request": "launch",
      "program": "${workspaceFolder}/node_modules/.bin/next",
      "args": ["dev"],
      "cwd": "${workspaceFolder}",
      "env": {
        "NODE_OPTIONS": "--inspect"
      },
      "sourceMaps": true,
      "outFiles": ["${workspaceFolder}/.next/**/*.js"],
      "console": "integratedTerminal"
    },
    {
      "name": "Next.js: Attach to Chrome",
      "type": "chrome",
      "request": "attach",
      "port": 9222,
      "urlFilter": "http://localhost:3000/*",
      "sourceMaps": true,
      "webRoot": "${workspaceFolder}"
    }
  ],
  "compounds": [
    {
      "name": "Next.js: Full Stack Debug",
      "configurations": ["Next.js: Server-Side Debug", "Next.js: Attach to Chrome"]
    }
  ]
}

The compounds entry lets you launch both configurations simultaneously with a single F5. You can then set breakpoints in Server Component code (hit by the Node.js debugger) and Client Component code (hit by the Chrome debugger).

Start Chrome with debugging enabled for the “Attach to Chrome” configuration:

# macOS
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9222

# Or use VS Code's built-in browser launch (simpler):
# Change "request": "attach" to "request": "launch" and add "url": "http://localhost:3000"

Debugging in Vitest

Vitest, our test runner, supports breakpoint debugging through VS Code:

// .vscode/launch.json — add to configurations array
{
  "name": "Vitest: Debug Current File",
  "type": "node",
  "request": "launch",
  "autoAttachChildProcesses": true,
  "skipFiles": ["<node_internals>/**", "**/node_modules/**"],
  "program": "${workspaceFolder}/node_modules/.bin/vitest",
  "args": ["run", "${relativeFile}"],
  "smartStep": true,
  "console": "integratedTerminal",
  "sourceMaps": true
},
{
  "name": "Vitest: Debug All Tests",
  "type": "node",
  "request": "launch",
  "autoAttachChildProcesses": true,
  "skipFiles": ["<node_internals>/**", "**/node_modules/**"],
  "program": "${workspaceFolder}/node_modules/.bin/vitest",
  "args": ["run"],
  "smartStep": true,
  "console": "integratedTerminal",
  "sourceMaps": true
}

With these configurations, open a test file, set a breakpoint inside a test, and press F5 with “Vitest: Debug Current File” selected. The debugger will hit your breakpoint when that test executes — exactly equivalent to debugging a unit test in Visual Studio’s Test Explorer.

You can also use vitest --inspect-brk for attaching from a terminal:

# Start Vitest in debug mode — pauses before first test
pnpm exec vitest --inspect-brk run src/users/users.service.test.ts
# Then attach with VS Code's "Attach to Running Process" configuration

Chrome DevTools — Frontend Debugging

For client-side React and Vue code, Chrome DevTools is the primary debugger. The Sources panel in DevTools shows your TypeScript source (via source maps), allows setting breakpoints, and provides a watch window and call stack identical in capability to Visual Studio.

Open DevTools with F12. Navigate to Sources > localhost:3000 > your TypeScript files. Set a breakpoint by clicking the line number. Reload the page if the code you want to debug runs on load.

The most useful DevTools panels for TypeScript debugging:

  • Sources — Breakpoints, step-through, watch expressions, call stack. Your primary debugging panel.
  • Network — Inspect API requests and responses. Invaluable for debugging data fetching.
  • Console — Evaluate expressions in the current page context. Equivalent to Visual Studio’s Immediate Window for running code.
  • Application — Inspect localStorage, sessionStorage, cookies, IndexedDB.
  • Performance — Profile rendering and JavaScript execution (rarely needed unless investigating performance regressions).

The debugger Statement

The debugger statement is a hardcoded breakpoint. When DevTools (or VS Code) is open and a debugger statement is reached, execution pauses exactly as if you’d set a breakpoint in the UI.

async function processOrder(orderId: OrderId): Promise<void> {
  const order = await orderRepository.findById(orderId);

  debugger; // Execution pauses here when DevTools is open
  // Inspect 'order' in the Sources panel or Console

  const result = await paymentService.charge(order.total);
  // ...
}

This is the equivalent of programmatic breakpoints in C# (System.Diagnostics.Debugger.Break()). Use it for situations where you can’t set a breakpoint through the UI — code generated at runtime, event handlers attached by a library, or code that runs before the debugger finishes attaching.

Remove debugger statements before committing. The no-debugger ESLint rule catches any that slip through.

Structured Console Logging That Does Not Suck

Raw console.log calls scattered throughout a codebase are the TypeScript equivalent of littering your C# code with Debug.WriteLine. They don’t belong in production, they’re not queryable, and they make log output unreadable. But console.log is genuinely useful during development when configured well.

// Avoid: unstructured, un-searchable, removed before commit
console.log('user', user);
console.log('processing order');

// Better: structured, labeled, still removed before commit but more useful while present
console.log('[UsersService.getById]', { id, user });
console.log('[OrderService.process] Starting', { orderId, total: order.total });

// Best: use a real logger in service code (stays in production)
import { Logger } from '@nestjs/common';

@Injectable()
export class UsersService {
  private readonly logger = new Logger(UsersService.name);

  async getById(id: UserId): Promise<User | null> {
    this.logger.debug('Fetching user', { id });
    const user = await this.db.user.findUnique({ where: { id } });
    this.logger.debug('Fetched user', { id, found: user !== null });
    return user;
  }
}

For temporary debugging during development, use console.table for arrays and console.dir for deep object inspection:

// Inspect an array of objects as a table — much more readable than JSON.stringify
console.table(users.map(u => ({ id: u.id, name: u.name, role: u.role })));

// Deep inspect with full prototype chain (Node.js)
console.dir(complexObject, { depth: null });

// Time a section of code
console.time('db-query');
const results = await db.user.findMany({ where: { active: true } });
console.timeEnd('db-query'); // Outputs: "db-query: 143ms"

For production code in NestJS, use the built-in Logger or a structured logger like Pino (see Article 4.9). Structured JSON logs are queryable in log aggregation systems; console.log is not.

Node.js --inspect Flag

The --inspect flag is the Node.js equivalent of attaching a debugger to a .NET process. It opens a WebSocket server that accepts debugger connections.

# Basic inspect — listens on port 9229
node --inspect src/main.js

# With ts-node (TypeScript source, no compile step)
node --inspect -r ts-node/register src/main.ts

# Break immediately on start — useful for debugging startup code
node --inspect-brk src/main.js

# Change the bind address and port (0.0.0.0 listens on all interfaces; use another port when 9229 is busy)
node --inspect=0.0.0.0:9230 src/main.js

When --inspect is active, Chrome itself can attach directly to Node.js. Open chrome://inspect in Chrome and click “inspect” next to your Node.js process. This opens a DevTools window connected to your server-side Node.js process — the same interface you use for frontend debugging, but running your backend code.

This is particularly useful for debugging Prisma queries, examining request data, and stepping through NestJS service code without a separate VS Code configuration.

React DevTools and Vue DevTools

These are browser extensions that add a Components panel to Chrome DevTools, giving you a tree view of the component hierarchy with live prop and state inspection.

React DevTools (Chrome/Firefox extension):

  • Components panel: Select any component in the tree, inspect its current props, state, and hooks (including useState values, useRef, context values)
  • Profiler panel: Record renders and identify which components re-render and why

Vue DevTools (Chrome/Firefox extension):

  • Component inspector: Equivalent to React DevTools’ Components panel
  • Pinia inspector: Inspect store state and track mutations
  • Timeline: Record events and mutations with timestamps

These tools are essential for diagnosing the most common frontend bug class: “the component isn’t rendering the data I expect.” Rather than adding console.log inside the component, inspect props and state directly in the extension.


Key Differences

.NET / Visual Studio | TypeScript / VS Code + DevTools
PDB files for source mapping | Source maps (.js.map files)
F5 to launch with debugger | F5 in VS Code with launch.json configured
Attach to Process dialog | node --inspect + VS Code “Attach” config
Immediate Window | Chrome DevTools Console, VS Code Debug Console
Watch Window | DevTools Watch expressions, VS Code Watch panel
Exception Settings dialog | Pause on caught/uncaught exceptions in DevTools
Edit and Continue | Not supported — restart process after changes
Step Into (F11) | F11 in VS Code / DevTools
Step Over (F10) | F10 in VS Code / DevTools
Step Out (Shift+F11) | Shift+F11 in VS Code / DevTools
Debug.WriteLine() | debugger statement or console.log
Exception Helper window | DevTools “pause on exceptions” checkbox
Test Explorer debugging | Vitest launch configuration in VS Code
[Conditional("DEBUG")] | if (process.env.NODE_ENV === 'development')
Application Insights Live Metrics | Sentry Performance (production tracing)

Gotchas for .NET Engineers

1. Source Maps Must Match the Running Code

The most common reason breakpoints “don’t hit” or show in the wrong location: the source map is stale. If you compiled TypeScript to dist/ an hour ago, then made changes to the TypeScript source without recompiling, the source map no longer matches the running code. The debugger will show breakpoints in the wrong location, or they will simply not trigger.

Solutions:

  • Use ts-node for development (executes TypeScript directly, no compile step, always current)
  • Use a --watch compiler mode so dist/ rebuilds automatically on changes
  • When using compiled output for debugging, always run pnpm build before attaching the debugger
# In one terminal — watch mode recompiles on every save
pnpm exec tsc --watch

# In another terminal — start Node.js, watching for changes
node --inspect dist/main.js
# Then use nodemon or equivalent to auto-restart on dist/ changes

With NestJS’s dev server (pnpm dev / nest start --watch), this is handled automatically — the dev server recompiles and restarts on changes. The attach-based debugging workflow works cleanly here.

2. console.log Output Is Your Primary Stack Trace in Some Environments

In C#, an uncaught exception gives you a full stack trace with file names, line numbers, and the exact position of the throw. In TypeScript, this only works cleanly in environments that have loaded source maps.

In Node.js production environments (minified, bundled code without source maps loaded), a stack trace will point to bundle.js:1:14823 — useless. This is why source maps matter in production for error tracking (Sentry reads source maps to transform these stack traces back to TypeScript), and why structured logging with context is more valuable than stack traces for production diagnosis.

During local development:

  • ts-node always gives you correct TypeScript stack traces (no compile step, no source map mismatch)
  • The Node.js --inspect debugger with VS Code shows correct TypeScript locations
  • console.error(new Error('message')) prints a full stack trace to the console including TypeScript source locations (if source maps are configured)

In production:

  • Configure Sentry to upload source maps during deployment (Article 7.1)
  • Never rely on production stack traces without source map support

3. TypeScript Errors in One Place, Runtime Errors in Another

TypeScript’s type system catches one class of errors at compile time. But TypeScript types are erased at runtime. This means:

  • A variable typed as string might be undefined at runtime if data came from an unvalidated external source
  • A function that TypeScript says returns User might return null if Prisma’s query returns null and you’ve declared the return type incorrectly
  • An as cast bypasses type checking — const user = data as User tells TypeScript to trust you, but the runtime value may not match

When you encounter a runtime error that seems impossible given the TypeScript types, the first questions to ask:

  1. Did this data come through a Zod-validated boundary, or did it come in unvalidated?
  2. Is there an as cast somewhere in the chain that overrode type safety?
  3. Did an await get dropped, causing the Promise object itself to be assigned to a variable typed as the resolved value?
// This TypeScript error misleads you — the runtime error is different
async function getUser(id: string): Promise<User> {
  return db.user.findFirst({ where: { id } }); // Returns User | null
  // TypeScript error: Type 'User | null' is not assignable to type 'User'
}

// Fix the TypeScript error with a cast — now you have a runtime problem
async function getUser(id: string): Promise<User> {
  return db.user.findFirst({ where: { id } }) as Promise<User>; // Suppresses error
  // At runtime: user is null, next code throws "Cannot read property 'email' of null"
}

// Correct fix — handle the null case
async function getUser(id: string): Promise<User> {
  const user = await db.user.findFirst({ where: { id } });
  if (!user) throw new NotFoundException(`User ${id} not found`);
  return user;
}

When debugging runtime errors that TypeScript didn’t predict, add a debugger statement at the point where the suspicious value is used and inspect its actual runtime type in the debugger — don’t trust what TypeScript says it is.
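
The dropped-await case from question 3 is worth a sketch of its own (db is assumed to be a Prisma client as in the earlier examples). TypeScript stays quiet because the Promise ends up somewhere that accepts any value:

// A dropped await: 'user' holds a pending Promise, not a User
async function auditUser(id: string): Promise<void> {
  const user = db.user.findFirst({ where: { id } }); // missing 'await'
  console.log(`Audited user: ${JSON.stringify(user)}`); // logs "Audited user: {}" at runtime
}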

4. Hot Reload Is Not Edit and Continue

Visual Studio’s Edit and Continue lets you modify code while paused at a breakpoint and resume execution with the changed code. TypeScript tooling does not support this. When you modify a file during a debugging session:

  • In VS Code with the “Launch” configuration: You must restart the debug session
  • In the “Attach” configuration with a watch-mode server: The server restarts (killing the session), and you re-attach
  • In Chrome DevTools: You can edit sources in DevTools, but changes are in-memory only and do not persist

The practical workaround: use the “Attach” configuration paired with a watch-mode dev server. When you modify code, the server restarts automatically, and you re-attach (which VS Code can be configured to do automatically with "restart": true):

// launch.json — auto-restart attachment after server reload
{
  "name": "NestJS: Attach (Auto-Restart)",
  "type": "node",
  "request": "attach",
  "port": 9229,
  "restart": true,  // Re-attach when the process restarts
  "sourceMaps": true,
  "outFiles": ["${workspaceFolder}/dist/**/*.js"]
}

This gives you a reasonable approximation of Edit and Continue: modify code, save, server restarts, debugger re-attaches, you set your breakpoints again. It’s not as seamless as Visual Studio, but it works.


Common Debugging Scenarios

API Not Returning Expected Data

  1. Set a breakpoint in the NestJS controller method handling the request
  2. Attach VS Code with the NestJS Attach configuration
  3. Make the API request (from browser, Postman, or curl)
  4. Inspect the incoming dto or @Param values to verify request data is correct
  5. Step into the service call to see what the database returns
  6. Check whether data transformations are producing the expected output

Component Not Re-Rendering in React/Vue

  1. Open React/Vue DevTools in Chrome
  2. Select the component that should be re-rendering
  3. Inspect its current props and state
  4. Trigger the action that should cause re-render
  5. Watch for prop/state changes in DevTools — if props change but rendering doesn’t update, check that the component is correctly using the prop (not copying it into local state that doesn’t update)
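
The anti-pattern in step 5, copying a prop into local state that never updates, looks like this (a minimal sketch; the Price component is hypothetical):

import { useState } from "react";

function Price({ amount }: { amount: number }) {
  const [value] = useState(amount); // WRONG: useState reads 'amount' only on the first render
  return <span>{value}</span>;      // keeps showing the stale value when 'amount' changes
}
// Fix: render {amount} directly, or derive display values from the prop on every render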

Type Error at Runtime That TypeScript Didn’t Catch

  1. Add debugger at the point where the runtime error occurs
  2. Inspect the actual runtime type of the suspicious variable (use typeof variable in the Console)
  3. Trace backwards to where the value entered the system
  4. Look for as casts or unvalidated external data

Test Failing With Unclear Output

  1. Add the Vitest debug configuration to launch.json
  2. Open the failing test file
  3. Set a breakpoint inside the failing test
  4. Press F5 with “Vitest: Debug Current File”
  5. Inspect the actual and expected values at the point of failure

Hands-On Exercise

This exercise sets up a complete debugging configuration for a NestJS API and verifies it works.

Step 1: Create a minimal NestJS project

pnpm dlx @nestjs/cli new debug-exercise
cd debug-exercise

Step 2: Enable source maps

Verify tsconfig.json has:

{
  "compilerOptions": {
    "sourceMap": true
  }
}

Step 3: Create the launch.json

Create .vscode/launch.json with the NestJS attach configuration from this article. Then add this script to package.json:

{
  "scripts": {
    "debug": "node --inspect -r ts-node/register -r tsconfig-paths/register src/main.ts"
  }
}

Step 4: Add a deliberate bug

In src/app.controller.ts, add a method:

@Get('user/:id')
getUser(@Param('id') id: string) {
  const users = [
    { id: '1', name: 'Alice', role: 'admin' },
    { id: '2', name: 'Bob', role: 'viewer' },
  ];
  // Bug: this returns undefined for unknown IDs, but the type says it returns an object
  return users.find(u => u.id === id);
}

Step 5: Debug the endpoint

  1. Run pnpm debug in one terminal
  2. Press F5 in VS Code with “NestJS: Attach to Running Process”
  3. Set a breakpoint inside getUser
  4. Navigate to http://localhost:3000/user/1 in a browser
  5. Verify the breakpoint hits, inspect id and users
  6. Use the Debug Console to evaluate users.find(u => u.id === '99') and see undefined

Step 6: Add a Vitest debug configuration and debug a test

Create src/app.controller.spec.ts:

import { describe, it, expect } from 'vitest';

describe('AppController', () => {
  it('returns undefined for unknown user IDs', () => {
    const users = [{ id: '1', name: 'Alice' }];
    const result = users.find(u => u.id === '99');
    expect(result).toBeUndefined(); // Set breakpoint here
  });
});

Add the Vitest configuration to launch.json, set a breakpoint inside the test, and press F5. Verify the debugger pauses at the breakpoint.
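
If the Vitest entry is not already in your launch.json, here is a sketch consistent with the Quick Reference below (names and paths are assumptions):

// .vscode/launch.json: Vitest debug entry (sketch)
{
  "name": "Vitest: Debug Current File",
  "type": "node",
  "request": "launch",
  "program": "${workspaceFolder}/node_modules/.bin/vitest",
  "args": ["run", "${relativeFile}"],
  "console": "integratedTerminal",
  "autoAttachChildProcesses": true
}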


Quick Reference

launch.json Configurations Summary

Configuration | Use For | Key Settings
NestJS Launch (ts-node) | Start and debug in one step | runtimeArgs: ["-r", "ts-node/register"]
NestJS Attach | Attach to running --inspect server | "request": "attach", "port": 9229
Next.js Server Debug | Debug Server Components and API routes | NODE_OPTIONS: "--inspect"
Next.js Chrome Attach | Debug Client Components | "type": "chrome"
Vitest Debug | Debug failing tests | "program": "node_modules/.bin/vitest"

Keyboard Shortcuts (VS Code)

Action | Shortcut | Visual Studio Equivalent
Start debugging | F5 | F5
Stop debugging | Shift+F5 | Shift+F5
Toggle breakpoint | F9 | F9
Step Over | F10 | F10
Step Into | F11 | F11
Step Out | Shift+F11 | Shift+F11
Continue | F5 (while paused) | F5
Open Debug Console | Ctrl+Shift+Y | Immediate Window: Ctrl+Alt+I

Source Map Troubleshooting

Symptom | Likely Cause | Fix
Breakpoints don’t hit | Source maps missing or stale | Add "sourceMap": true to tsconfig, rebuild
Stack traces show .js files | Source maps not loaded by runtime | Verify map files exist alongside JS files
Breakpoint in wrong location | Compiled output is outdated | Recompile or use ts-node to skip compile
debugger statement ignored | No debugger attached | Open DevTools before running, or attach VS Code

Useful --inspect Commands

# Start with inspector (attach-based debugging)
node --inspect src/main.js

# Break before first line (for debugging startup code)
node --inspect-brk src/main.js

# With ts-node (no compile step needed)
node --inspect -r ts-node/register src/main.ts

# NestJS with nest CLI
nest start --debug

# Vitest with inspector
pnpm exec vitest --inspect-brk run

Further Reading

React Fundamentals for .NET Engineers

For .NET engineers who know: Blazor components, Razor syntax, and component-based UI patterns
You’ll learn: How React’s component model, JSX, and one-way data flow map to what you already know from Blazor, and where the mental model diverges
Time: 18 min read

The .NET Way (What You Already Know)

Blazor gives you a component model that should feel familiar: each component is a .razor file combining a template (HTML with Razor syntax) and logic (C# code block). Components receive data via [Parameter] attributes, communicate up via EventCallback, and react to state changes automatically through the framework’s rendering cycle. The file structure is the component — one file, one component.

// Counter.razor — a complete Blazor component
@page "/counter"

<h1>Count: @currentCount</h1>
<button @onclick="Increment">Click me</button>

@code {
    [Parameter]
    public int InitialCount { get; set; } = 0;

    private int currentCount;

    protected override void OnInitialized()
    {
        currentCount = InitialCount;
    }

    private void Increment()
    {
        currentCount++;
    }
}

You know this pattern cold. The React equivalent is structurally identical in intent and noticeably different in mechanics. The mapping is close enough that you can transfer your architectural thinking directly; the syntax and some behavioral details are where you need to pay attention.

The React Way

JSX: HTML-in-TypeScript

React components return JSX — a syntax extension that lets you write what looks like HTML directly inside a TypeScript function. This is not a template language parsed separately (like Razor); it is syntactic sugar that your build toolchain (the TypeScript compiler, Babel, or esbuild) transforms into plain function calls.

// What you write
const element = <h1 className="title">Hello, world</h1>;

// What the compiler produces
const element = React.createElement("h1", { className: "title" }, "Hello, world");

That transformation is the entire magic of JSX. Once you internalize that JSX is just function calls that return objects describing UI, most of React’s behavior becomes obvious.

Key JSX rules that differ from HTML and Razor:

  • class is className (because class is a reserved word in JavaScript)
  • for on labels is htmlFor
  • All tags must be closed: <br />, not <br>
  • Self-closing tags need the slash: <input />, not <input>
  • Curly braces {} are the escape hatch to TypeScript — equivalent to @ in Razor
  • Style is an object, not a string: style={{ color: 'red', fontSize: 16 }}
  • JSX expressions must return a single root element (or use <>...</> fragment syntax)
// Razor equivalent
// <p>@user.Name is @user.Age years old</p>

// JSX equivalent
const greeting = (
  <p>{user.name} is {user.age} years old</p>
);

Functional Components: The Only Kind You Need

React has two ways to define components: class components (the original model) and functional components (the current model). Class components are legacy. Every new component you write will be a function that accepts a props object and returns JSX.

// The complete anatomy of a functional component

interface GreetingProps {
  name: string;
  role?: string;
}

function Greeting({ name, role = "engineer" }: GreetingProps): JSX.Element {
  return (
    <div>
      <h2>Hello, {name}</h2>
      <p>Role: {role}</p>
    </div>
  );
}

export default Greeting;

The function name is the component name. Capitalization is not optional — React uses it to distinguish HTML elements (lowercase) from components (PascalCase). <div> is an HTML element; <Greeting> is a component call.

Props: Component Parameters

Props are the equivalent of Blazor’s [Parameter] attributes. They flow one direction — parent to child — and the child must not mutate them. This is the “one-way data flow” React is known for, and it is the biggest conceptual shift from two-way binding.

// Blazor: parameters flow in, EventCallback flows up
// UserCard.razor
[Parameter] public string Name { get; set; }
[Parameter] public int Age { get; set; }
[Parameter] public EventCallback<string> OnSelect { get; set; }
// React: props flow in, callback functions flow up
interface UserCardProps {
  name: string;
  age: number;
  onSelect: (name: string) => void;
}

function UserCard({ name, age, onSelect }: UserCardProps) {
  return (
    <div onClick={() => onSelect(name)}>
      <strong>{name}</strong>, age {age}
    </div>
  );
}

The structural pattern is identical: data in, events out. The difference is that React’s “events out” mechanism is just a callback prop — a plain function. There is no EventCallback<T> wrapper or special invocation syntax.

State and Re-rendering

React re-renders a component whenever its state changes. State in functional components is managed via hooks — useState being the fundamental one. When you call the state setter, React schedules a re-render of that component and all its descendants.

import { useState } from "react";

function Counter({ initialCount = 0 }: { initialCount?: number }) {
  const [count, setCount] = useState(initialCount);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
      <button onClick={() => setCount(0)}>Reset</button>
    </div>
  );
}

useState returns a tuple: the current value and a setter function. The setter triggers the re-render. You never mutate the value directly — count++ will not trigger a re-render.

This is the INotifyPropertyChanged pattern, but the notification mechanism is the setter function rather than a property change event. The effect is the same: update state, UI updates.

The Component Lifecycle

Blazor has explicit lifecycle methods: OnInitialized, OnParametersSet, OnAfterRender, Dispose. React’s functional component model consolidates these into two mechanisms: rendering (the function itself) and effects (useEffect, covered in Article 3.2).

The render phase is simple: React calls your component function, you return JSX, React reconciles that JSX with the current DOM. Your component function must be a pure function of its props and state — given the same inputs, it must return the same output. Side effects (API calls, subscriptions, timers) do not belong in the render body.

// This is the "render" phase — pure, no side effects
function UserProfile({ userId }: { userId: string }) {
  const [user, setUser] = useState<User | null>(null);

  // Side effects go in useEffect (Article 3.2)
  // The render body just describes what to show

  if (user === null) {
    return <p>Loading...</p>;
  }

  return (
    <div>
      <h2>{user.name}</h2>
      <p>{user.email}</p>
    </div>
  );
}

The rough lifecycle equivalence:

Blazor | React (functional)
Constructor / OnInitialized | useEffect(() => { ... }, []) — runs once after first render
OnParametersSet | useEffect(() => { ... }, [prop]) — runs when prop changes
OnAfterRender | useEffect(() => { ... }) — runs after every render
IDisposable.Dispose | Return value of useEffect callback (cleanup function)
StateHasChanged() | setState(...) setter call

Conditional Rendering

Razor uses @if and @switch blocks. JSX uses TypeScript expressions, which means you use ternary operators and logical && for inline conditionals.

// Razor
@if (isLoading) {
    <Spinner />
} else {
    <Content model="@model" />
}
// React — ternary for if/else
{isLoading ? <Spinner /> : <Content model={model} />}

// React — && for "render only if true"
{isAuthenticated && <AdminPanel />}

// React — for complex conditions, extract to a variable
const content = (() => {
  if (isLoading) return <Spinner />;
  if (error) return <ErrorMessage message={error} />;
  return <Content model={model} />;
})();

return <div>{content}</div>;

The && shorthand has a gotcha: if the left side evaluates to 0, React renders 0 (not nothing). Use !!value && or a ternary when the value could be zero.

Rendering Lists

In Razor you use @foreach. In React you map over arrays and return JSX. The key prop is mandatory and must be unique and stable.

// Razor
<ul>
    @foreach (var item in items)
    {
        <li>@item.Name</li>
    }
</ul>
// React
<ul>
  {items.map((item) => (
    <li key={item.id}>{item.name}</li>
  ))}
</ul>

The key prop is not accessible inside the component (you cannot read props.key). Its only purpose is to let React’s reconciler track which list items have moved, been added, or been removed between renders. Using array index as a key (key={index}) is acceptable only for static, non-reorderable lists — for anything that can change, use a stable ID.

Event Handling: Synthetic Events

React wraps native DOM events in a synthetic event system. The API mirrors the DOM Event interface, so .preventDefault(), .stopPropagation(), and .target all work as expected. In React versions before 17 these synthetic events were pooled for performance, which mattered only if you accessed the event asynchronously after the handler returned; modern React no longer pools events.

function SearchForm() {
  const [query, setQuery] = useState("");

  function handleSubmit(event: React.FormEvent<HTMLFormElement>) {
    event.preventDefault();          // Same as you'd expect
    console.log("Searching for:", query);
  }

  function handleChange(event: React.ChangeEvent<HTMLInputElement>) {
    setQuery(event.target.value);    // Controlled input pattern
  }

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="text"
        value={query}
        onChange={handleChange}
        placeholder="Search..."
      />
      <button type="submit">Search</button>
    </form>
  );
}

Event handlers in JSX are camelCase: onClick, onChange, onSubmit, onKeyDown. They accept a function reference, not a string — never onClick="handleClick()".

Component Composition

React has no concept of content projection sections the way Blazor does with @ChildContent and @Body. The equivalent mechanism is the children prop.

// Blazor — content projection
// Card.razor
<div class="card">
    @ChildContent
</div>

@code {
    [Parameter] public RenderFragment ChildContent { get; set; }
}
// React — children prop
interface CardProps {
  children: React.ReactNode;
  title?: string;
}

function Card({ children, title }: CardProps) {
  return (
    <div className="card">
      {title && <h3 className="card-title">{title}</h3>}
      <div className="card-body">{children}</div>
    </div>
  );
}

// Usage
<Card title="User Details">
  <p>Name: Chris</p>
  <p>Role: Engineer</p>
</Card>

For more complex slot patterns (equivalent to multiple named @ChildContent sections), pass JSX as named props:

interface LayoutProps {
  sidebar: React.ReactNode;
  main: React.ReactNode;
}

function Layout({ sidebar, main }: LayoutProps) {
  return (
    <div className="layout">
      <aside>{sidebar}</aside>
      <main>{main}</main>
    </div>
  );
}

// Usage
<Layout
  sidebar={<NavigationMenu />}
  main={<ContentArea />}
/>

A Complete Component: Annotated for .NET Engineers

Here is a realistic component combining all the above concepts. Read the comments as the bridge between what you know and what you are learning.

// UserList.tsx
// Equivalent to a Blazor component with [Parameter], foreach, and EventCallback

import { useState } from "react";

// TypeScript interface — equivalent to a C# record or DTO
interface User {
  id: number;
  name: string;
  email: string;
  isActive: boolean;
}

// Props interface — equivalent to [Parameter] declarations in Blazor @code block
interface UserListProps {
  users: User[];
  title?: string;
  onUserSelect: (user: User) => void;   // Equivalent to EventCallback<User>
}

// The component: a function, not a class
// Return type JSX.Element is optional (inferred), but explicit is cleaner
function UserList({ users, title = "Users", onUserSelect }: UserListProps): JSX.Element {
  // useState — equivalent to a private field that triggers StateHasChanged when set
  const [filter, setFilter] = useState<"all" | "active" | "inactive">("all");
  const [selectedId, setSelectedId] = useState<number | null>(null);

  // Derived data — computed from state, no extra hooks needed
  // Equivalent to a computed property in your Blazor component
  const visibleUsers = users.filter((u) => {
    if (filter === "active") return u.isActive;
    if (filter === "inactive") return !u.isActive;
    return true;
  });

  // Event handler — a plain function, not an EventCallback invocation
  function handleSelect(user: User): void {
    setSelectedId(user.id);
    onUserSelect(user);               // "Invoke" the callback — equivalent to EventCallback.InvokeAsync
  }

  // The render output — equivalent to the Razor markup portion of a .razor file
  // Note: this is the return of the function, not a separate template file
  return (
    <div className="user-list">
      <h2>{title}</h2>

      {/* Filter controls — JSX comments use this syntax */}
      <div className="filters">
        {/* onClick receives a function, not a string */}
        <button
          onClick={() => setFilter("all")}
          className={filter === "all" ? "active" : ""}   // className, not class
        >
          All ({users.length})
        </button>
        <button
          onClick={() => setFilter("active")}
          className={filter === "active" ? "active" : ""}
        >
          Active
        </button>
        <button
          onClick={() => setFilter("inactive")}
          className={filter === "inactive" ? "active" : ""}
        >
          Inactive
        </button>
      </div>

      {/* Conditional rendering — ternary, not @if */}
      {visibleUsers.length === 0 ? (
        <p className="empty-state">No users match the current filter.</p>
      ) : (
        // List rendering — .map(), not @foreach
        // key prop is mandatory and must be stable
        <ul className="user-items">
          {visibleUsers.map((user) => (
            <li
              key={user.id}    // Stable unique ID — not the array index
              className={`user-item ${selectedId === user.id ? "selected" : ""}`}
              onClick={() => handleSelect(user)}
            >
              <span className="user-name">{user.name}</span>
              <span className="user-email">{user.email}</span>
              {/* && short-circuit — renders only when true */}
              {user.isActive && (
                <span className="badge active">Active</span>
              )}
            </li>
          ))}
        </ul>
      )}

      <p className="summary">
        Showing {visibleUsers.length} of {users.length} users
      </p>
    </div>
  );
}

export default UserList;
// App.tsx — consuming the component
// Equivalent to placing <UserList> in a parent .razor file

import { useState } from "react";
import UserList from "./UserList";

const SAMPLE_USERS = [
  { id: 1, name: "Alice Chen", email: "alice@example.com", isActive: true },
  { id: 2, name: "Bob Perez", email: "bob@example.com", isActive: false },
  { id: 3, name: "Carol Smith", email: "carol@example.com", isActive: true },
];

function App() {
  const [selectedUser, setSelectedUser] = useState<string | null>(null);

  return (
    <div>
      <UserList
        users={SAMPLE_USERS}
        title="Engineering Team"
        onUserSelect={(user) => setSelectedUser(user.name)}
      />
      {selectedUser && <p>Selected: {selectedUser}</p>}
    </div>
  );
}

export default App;

Key Differences

Concept | Blazor (.NET) | React (TypeScript)
Template language | Razor syntax (.razor files, @ prefix) | JSX (.tsx files, {} escape)
Component definition | Class or partial class with markup | Plain TypeScript function
Component parameters | [Parameter] attribute on properties | Props object (destructured function argument)
HTML attribute for CSS class | class | className
Event callbacks | EventCallback<T>, InvokeAsync() | Plain function: (value: T) => void
State that triggers re-render | private field + StateHasChanged() | useState hook — setter triggers re-render
Two-way binding | @bind directive | Controlled input: value + onChange
Child content projection | RenderFragment ChildContent | children: React.ReactNode prop
Named slots | Multiple RenderFragment parameters | JSX passed as named props
Iteration | @foreach | .map() returning JSX with key prop
Conditionals | @if, @switch | Ternary ? :, logical &&, extracted variables
Code-behind separation | .razor + .razor.cs | Single .tsx file (logic and markup together)
Component lifecycle | OnInitialized, OnAfterRender, Dispose | useEffect (Article 3.2)
Global state/services | DI container, injected services | Context API, state management libraries

Gotchas for .NET Engineers

Gotcha 1: JSX is not HTML, and attribute casing matters.

Coming from Razor, it is natural to write class="...". TypeScript’s JSX typings usually flag it, but if it slips through, the only feedback is a runtime console warning suggesting className, which is easy to miss. The corrected form is className. Similarly, tabindex is tabIndex, readonly is readOnly, maxlength is maxLength. The pattern is: multi-word DOM attributes become camelCase in JSX. Memorize className and htmlFor; the rest you can look up.

Gotcha 2: Mutation does not trigger re-renders — ever.

In C#, you might do user.Name = "Updated" and call StateHasChanged(). In React, modifying a state variable’s internal properties does nothing to trigger a re-render. React compares state values by reference for objects and arrays. If you mutate in place, the reference does not change, and React sees no update.

// WRONG — mutates in place, React does not see a change
const [user, setUser] = useState({ name: "Alice", age: 30 });

function updateName() {
  user.name = "Bob";  // Mutates the object — no re-render
  setUser(user);      // Same reference — React bails out
}

// CORRECT — create a new object
function updateName() {
  setUser({ ...user, name: "Bob" });  // New object reference — triggers re-render
}

// WRONG — mutating an array
const [items, setItems] = useState([1, 2, 3]);

function addItem() {
  items.push(4);      // Mutates the array — no re-render
  setItems(items);    // Same reference — React bails out
}

// CORRECT — create a new array
function addItem() {
  setItems([...items, 4]);   // New array reference — triggers re-render
}

This is the single most common source of “my state changed but the UI didn’t update” bugs for .NET engineers learning React.

Gotcha 3: && with falsy values renders 0.

The idiom {count && <Thing />} is common in React code. It works when count is a boolean. When count is a number and its value is 0, the expression short-circuits and the value of the expression is 0 — which React renders as the text “0” in the DOM.

// WRONG — renders "0" when items.length is 0
{items.length && <ItemList items={items} />}

// CORRECT — use a boolean explicitly
{items.length > 0 && <ItemList items={items} />}

// ALSO CORRECT — ternary avoids the issue entirely
{items.length > 0 ? <ItemList items={items} /> : null}

Gotcha 4: The component function re-runs on every render — functions are not expensive.

.NET engineers sometimes avoid defining functions inside render because it looks like they are creating new delegate instances on every call. In React, this is normal and expected — the component function body runs on every render, including any nested function definitions. In practice this is not a performance problem; JavaScript function creation is cheap. You optimize only when profiling shows a real bottleneck, typically using useCallback (covered in Article 3.2).

Gotcha 5: Class components exist in the codebase — do not write them, but you need to read them.

Any React codebase that predates 2019 likely has class components. You will encounter extends React.Component, render() methods, this.state, this.setState, and lifecycle methods like componentDidMount. These are not wrong, but they are the old model. Do not write new class components. When refactoring, convert to functional components unless the scope is too large to justify it.

// Legacy class component — READ, do not WRITE
class OldCounter extends React.Component<{}, { count: number }> {
  state = { count: 0 };

  componentDidMount() {
    console.log("Mounted — equivalent to OnInitialized");
  }

  render() {
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        Count: {this.state.count}
      </button>
    );
  }
}

// Modern functional equivalent
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>Count: {count}</button>
  );
}

Gotcha 6: Props are read-only — TypeScript may not catch every violation.

React’s contract is that you do not mutate props. TypeScript can enforce this if you mark props readonly, but in practice most codebases do not do this. React will not throw an error if you mutate a prop object’s nested properties, but the behavior is undefined and will produce subtle bugs. Treat props as immutable values. If you need to derive a modified version, copy it.
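
A minimal sketch of treating props as immutable (UserBadge is a hypothetical component; User is the interface from the example above):

function UserBadge({ user }: { user: User }) {
  // WRONG: user.name = user.name.toUpperCase();   // mutates the parent's object in place
  const display = { ...user, name: user.name.toUpperCase() }; // copy, then modify the copy
  return <span>{display.name}</span>;
}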

Hands-On Exercise

Build a filterable data table component from scratch. The component should:

  1. Accept a users prop of type User[], where User has id, name, department, startDate, and isActive.
  2. Render the list as an HTML table with column headers.
  3. Include a text input that filters by name (case-insensitive, as the user types).
  4. Include buttons to filter by department (collect unique departments from the data).
  5. Include a checkbox to show only active users.
  6. Show a count: “Showing X of Y users”.
  7. Invoke an onSelect callback when a row is clicked.

Requirements:

  • Use functional components and TypeScript interfaces throughout.
  • All derived data (filtering, counting) computed inline from state — no useEffect for this exercise.
  • No external libraries — this is a JSX and props exercise.
  • Wire it into a parent App component that provides sample data and displays the selected user’s name.

This exercise forces you to handle: controlled inputs, list rendering with keys, conditional rendering, event handlers, and component composition — all in one context.

Quick Reference

JSX Syntax

HTML / Razor | JSX
class="..." | className="..."
for="..." | htmlFor="..."
<br> | <br />
<input> | <input />
style="color: red" | style={{ color: 'red' }}
<!-- comment --> | {/* comment */}
@value | {value}
@if (x) { ... } | {x ? ... : null} or {x && ...}
@foreach (var x in list) | {list.map(x => <li key={x.id}>...</li>)}

Component Anatomy

// Import hooks and types at the top
import { useState } from "react";

// Define props interface before the component
interface MyComponentProps {
  requiredProp: string;
  optionalProp?: number;              // ? = optional, like C# optional parameter
  onEvent: (value: string) => void;  // Callback prop — equivalent to EventCallback<string>
}

// Component is a named function, exported at the bottom
function MyComponent({ requiredProp, optionalProp = 0, onEvent }: MyComponentProps) {
  const [localState, setLocalState] = useState<string>("");

  return (
    <div>...</div>
  );
}

export default MyComponent;

.NET to React Concept Map

.NET / Blazor | React
[Parameter] attribute | Prop in the props interface
EventCallback<T> | (value: T) => void function prop
StateHasChanged() | State setter from useState
@ChildContent (RenderFragment) | children: React.ReactNode
Named RenderFragment slots | JSX passed as named props
@foreach | array.map() with key prop
@if / @switch | Ternary ? : / && / extracted variable
OnInitialized / Dispose | useEffect (Article 3.2)
@bind (two-way) | value + onChange (controlled input)
private field + StateHasChanged() | useState — value + setter
Partial class code-behind | Same .tsx file — logic above the return

Common React TypeScript Types

Use Case | TypeScript Type
Child elements | React.ReactNode
Click handler | React.MouseEvent<HTMLButtonElement>
Input change handler | React.ChangeEvent<HTMLInputElement>
Form submit handler | React.FormEvent<HTMLFormElement>
Any JSX element | JSX.Element or React.ReactElement
Ref to DOM element | React.RefObject<HTMLDivElement>
Style object | React.CSSProperties

Further Reading

React Hooks: The State Management Model

For .NET engineers who know: INotifyPropertyChanged, OnInitializedAsync, IDisposable, and constructor-injected services
You’ll learn: How React’s hook system maps to .NET’s state and lifecycle patterns, where the analogies hold, and where they break in ways that will cost you hours if you do not know about them upfront
Time: 20 min read

The .NET Way (What You Already Know)

In Blazor, state management and lifecycle are class-based concerns. A component inherits from ComponentBase, overrides lifecycle methods, declares private fields for state, and calls StateHasChanged() to trigger re-renders. Services arrive via constructor injection (or @inject). Subscriptions created during initialization are cleaned up in Dispose.

// A realistic Blazor component with lifecycle, state, and DI
@inject IUserService UserService
@inject ILogger<UserProfile> Logger
@implements IDisposable

<div>
    @if (_isLoading) {
        <Spinner />
    } else if (_user is not null) {
        <UserCard User="_user" />
    }
</div>

@code {
    [Parameter] public int UserId { get; set; }

    private User? _user;
    private bool _isLoading = true;
    private CancellationTokenSource _cts = new();

    protected override async Task OnInitializedAsync()
    {
        try
        {
            _user = await UserService.GetByIdAsync(UserId, _cts.Token);
        }
        finally
        {
            _isLoading = false;
        }
    }

    protected override async Task OnParametersSetAsync()
    {
        if (UserId != _user?.Id)
        {
            _isLoading = true;
            _user = await UserService.GetByIdAsync(UserId, _cts.Token);
            _isLoading = false;
        }
    }

    public void Dispose()
    {
        _cts.Cancel();
        _cts.Dispose();
    }
}

This is a complete, correct pattern. React hooks achieve the same result with a different mechanism — one that takes about 30 minutes to understand and about 30 days to stop making mistakes with.

The React Way

The Rules of Hooks (Why They Exist)

Before the individual hooks, understand the constraint they operate under. React tracks hooks by call order. Every time the component function runs (every render), React expects the same hooks to be called in the same order. This is why the rules exist:

Rule 1: Call hooks only at the top level. Not inside if statements, loops, or nested functions.

Rule 2: Call hooks only from React function components or other custom hooks. Not from plain utility functions or class components.

// WRONG — conditional hook call
function Profile({ userId, isAdmin }: ProfileProps) {
  if (isAdmin) {
    const [adminState, setAdminState] = useState(null); // Breaks the rule
  }
  // ...
}

// CORRECT — hook called unconditionally, condition is inside
function Profile({ userId, isAdmin }: ProfileProps) {
  const [adminState, setAdminState] = useState<AdminData | null>(null);

  if (isAdmin) {
    // Use adminState here
  }
}

The practical consequence: if you need a hook “only sometimes,” call it unconditionally and ignore its value when you do not need it. The eslint-plugin-react-hooks package enforces these rules at lint time — it should be installed in every project.
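
A sketch of the relevant configuration (the file name and format depend on how your project configures ESLint):

// .eslintrc.json (sketch)
{
  "plugins": ["react-hooks"],
  "rules": {
    "react-hooks/rules-of-hooks": "error",
    "react-hooks/exhaustive-deps": "warn"
  }
}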

useState: The INotifyPropertyChanged Replacement

useState returns the current value and a setter. Calling the setter schedules a re-render. You already understand this from Article 3.1; here is the full picture:

import { useState } from "react";

function OrderForm() {
  // Primitive state — string
  const [customerName, setCustomerName] = useState<string>("");

  // Object state — requires a new object on update (no mutation)
  const [address, setAddress] = useState<Address>({
    street: "",
    city: "",
    postalCode: "",
  });

  // Array state — requires a new array on update
  const [lineItems, setLineItems] = useState<LineItem[]>([]);

  // Boolean state — common for toggles
  const [isSubmitting, setIsSubmitting] = useState(false);

  // Lazy initializer — runs once, useful for expensive initial computation
  // Pass a function, not a value: useState(() => computeInitialState())
  const [cache, setCache] = useState<Map<string, string>>(() => new Map());

  function updateCity(city: string) {
    // Spread to create new object — mutation does not trigger re-render
    setAddress((prev) => ({ ...prev, city }));
    //           ^ Functional update form: prev => next
    //             Prefer this when new state depends on old state
  }

  function addLineItem(item: LineItem) {
    setLineItems((prev) => [...prev, item]);
  }

  function removeLineItem(id: string) {
    setLineItems((prev) => prev.filter((item) => item.id !== id));
  }

  // ...
}

The functional update form (setCount(prev => prev + 1)) is important. If you call multiple setters in the same event handler, React batches them into a single re-render. But if the new state depends on the current state, use the functional form to ensure you are reading the most recent value, not a stale closure (more on this below).
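
A minimal sketch of why this matters, assuming count and setCount come from a surrounding useState call:

function handleAddTwice() {
  setCount(count + 1); // both calls read the same render-time 'count'
  setCount(count + 1); // net effect after batching: +1, not +2
}

function handleAddTwiceCorrectly() {
  setCount((prev) => prev + 1); // each callback receives the latest pending value
  setCount((prev) => prev + 1); // net effect: +2
}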

useEffect: OnInitializedAsync + OnParametersSetAsync + Dispose

useEffect is where React’s lifecycle lives. It runs after the component renders and lets you perform side effects: data fetching, subscriptions, timers. Its second argument — the dependency array — controls when it runs.

import { useState, useEffect } from "react";

interface User {
  id: string;
  name: string;
  email: string;
}

function UserProfile({ userId }: { userId: string }) {
  const [user, setUser] = useState<User | null>(null);
  const [isLoading, setIsLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    // This block runs after every render where userId has changed.
    // On first render it always runs (equivalent to OnInitializedAsync).

    let cancelled = false; // Equivalent to CancellationToken

    async function fetchUser() {
      setIsLoading(true);
      setError(null);

      try {
        const response = await fetch(`/api/users/${userId}`);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        const data: User = await response.json();

        // Guard against setting state on an unmounted component
        // or after a newer fetch has started (equivalent to CancellationToken check)
        if (!cancelled) {
          setUser(data);
        }
      } catch (err) {
        if (!cancelled) {
          setError(err instanceof Error ? err.message : "Unknown error");
        }
      } finally {
        if (!cancelled) {
          setIsLoading(false);
        }
      }
    }

    fetchUser();

    // Cleanup function — equivalent to IDisposable.Dispose
    // Runs before the next effect execution, and when the component unmounts
    return () => {
      cancelled = true;
    };
  }, [userId]); // Dependency array — re-runs when userId changes
  //  ^^^^^^^^
  //  This is where most bugs live. See Gotchas.

  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>Error: {error}</p>;
  if (!user) return null;

  return (
    <div>
      <h2>{user.name}</h2>
      <p>{user.email}</p>
    </div>
  );
}

The dependency array semantics:

Dependency Array | When Effect Runs
No array: useEffect(() => {}) | After every render — equivalent to OnAfterRender
Empty array: useEffect(() => {}, []) | Once after first render — equivalent to OnInitializedAsync
With values: useEffect(() => {}, [x, y]) | After first render, then whenever x or y change

React compares dependencies using Object.is — the same as === for primitives, reference equality for objects. If you pass an object or array into the dependency array that is re-created on every render, the effect will run on every render regardless.

useContext: Constructor Injection Without a DI Container

useContext is how you access shared data without prop drilling — the equivalent of resolving a service from the DI container. You define a context (the service contract), provide a value high in the component tree (the service registration), and consume it anywhere below.

// Step 1: Define the context type and create the context
// Equivalent to defining an IAuthService interface and a default stub

import React, { useContext, useState } from "react";

interface AuthContext {
  currentUser: User | null;
  login: (credentials: Credentials) => Promise<void>;
  logout: () => void;
  isAuthenticated: boolean;
}

// The argument to createContext is the default value (used when no Provider is found)
const AuthContext = React.createContext<AuthContext>({
  currentUser: null,
  login: async () => {},
  logout: () => {},
  isAuthenticated: false,
});
// Step 2: The Provider component — equivalent to service registration in Program.cs
// Place this high in the tree, typically in App.tsx

function AuthProvider({ children }: { children: React.ReactNode }) {
  const [currentUser, setCurrentUser] = useState<User | null>(null);

  async function login(credentials: Credentials) {
    const user = await authService.login(credentials);
    setCurrentUser(user);
  }

  function logout() {
    setCurrentUser(null);
  }

  // The value object — what components receive when they call useContext(AuthContext)
  const value: AuthContext = {
    currentUser,
    login,
    logout,
    isAuthenticated: currentUser !== null,
  };

  return (
    <AuthContext.Provider value={value}>
      {children}
    </AuthContext.Provider>
  );
}
// Step 3: Consume the context anywhere below the Provider
// Equivalent to constructor injection — the component does not need to know where the data comes from

function NavBar() {
  const { currentUser, logout, isAuthenticated } = useContext(AuthContext);

  return (
    <nav>
      {isAuthenticated ? (
        <>
          <span>Welcome, {currentUser?.name}</span>
          <button onClick={logout}>Log out</button>
        </>
      ) : (
        <a href="/login">Log in</a>
      )}
    </nav>
  );
}

A critical difference from DI: consuming a context is not like holding a reference to an injected singleton. Every component that calls useContext(SomeContext) re-renders whenever the context value changes. If you put everything in one context, a change to any piece of that data re-renders all consumers. Split contexts by update frequency: keep a ThemeContext that rarely changes separate from an AuthContext that changes on every login and logout.
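
A sketch of what that split looks like (names are hypothetical, created the same way as AuthContext in Step 1; User is the same shape as in the AuthProvider example):

// Theme changes rarely; the session changes on every login and logout.
// Components that only read the theme are not re-rendered by session changes.
const ThemeContext = React.createContext<"light" | "dark">("light");
const SessionContext = React.createContext<{ user: User | null }>({ user: null });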

useRef: Mutable Values and Direct DOM Access

useRef returns a mutable object with a .current property. Mutating .current does not trigger a re-render. This has two use cases:

1. Accessing DOM elements directly (equivalent to ElementReference in Blazor or getElementById in traditional JS):

function VideoPlayer({ src }: { src: string }) {
  const videoRef = useRef<HTMLVideoElement>(null);

  function play() {
    videoRef.current?.play(); // Direct DOM access — equivalent to ElementReference.FocusAsync()
  }

  function pause() {
    videoRef.current?.pause();
  }

  return (
    <div>
      <video ref={videoRef} src={src} />
      <button onClick={play}>Play</button>
      <button onClick={pause}>Pause</button>
    </div>
  );
}

2. Storing mutable values that should not trigger re-renders (equivalent to a private field that is not part of component state):

function SearchInput({ onSearch }: { onSearch: (query: string) => void }) {
  const [inputValue, setInputValue] = useState("");
  const debounceRef = useRef<ReturnType<typeof setTimeout> | null>(null);

  function handleChange(event: React.ChangeEvent<HTMLInputElement>) {
    const value = event.target.value;
    setInputValue(value);

    // Debounce — clear previous timer, set new one
    // debounceRef.current is mutable without triggering a re-render
    if (debounceRef.current) {
      clearTimeout(debounceRef.current);
    }
    debounceRef.current = setTimeout(() => {
      onSearch(value);
    }, 300);
  }

  return (
    <input
      type="text"
      value={inputValue}
      onChange={handleChange}
      placeholder="Search..."
    />
  );
}

A common use of useRef in combination with useEffect is storing the previous value of a prop or state for comparison:

function usePrevious<T>(value: T): T | undefined {
  const ref = useRef<T | undefined>(undefined);

  useEffect(() => {
    ref.current = value;
  }); // No dependency array — runs after every render

  return ref.current; // Returns previous value (before current render)
}

useMemo and useCallback: Referential Stability

These two hooks are about performance and referential stability, not correctness. Understanding when to use them requires understanding why React re-renders.

When a component re-renders, every value defined in the function body is re-created. For primitives this is irrelevant — 5 === 5. For objects and functions this matters: {} !== {} and () => {} !== () => {}. If an object or function is passed as a prop or dependency, its new reference causes the child or effect to re-run even if the actual data has not changed.

useMemo memoizes a computed value. Use it when a computation is expensive, or when you need a stable object reference:

function OrderSummary({ orders }: { orders: Order[] }) {
  // Without useMemo: this runs on every render
  // With useMemo: only runs when orders changes
  const totals = useMemo(() => {
    return {
      subtotal: orders.reduce((sum, o) => sum + o.amount, 0),
      count: orders.length,
      average: orders.length > 0
        ? orders.reduce((sum, o) => sum + o.amount, 0) / orders.length
        : 0,
    };
    // Equivalent to a computed property backed by Lazy<T> with dependency tracking
  }, [orders]); // Recalculate only when orders changes

  return (
    <div>
      <p>Orders: {totals.count}</p>
      <p>Subtotal: ${totals.subtotal.toFixed(2)}</p>
      <p>Average: ${totals.average.toFixed(2)}</p>
    </div>
  );
}

useCallback memoizes a function reference. Use it when passing a callback to a child component that is optimized with React.memo, or when the function is a useEffect dependency:

function UserList({ onDelete }: { onDelete: (id: string) => Promise<void> }) {
  const [users, setUsers] = useState<User[]>([]);

  // Without useCallback: new function reference every render
  // With useCallback: same reference as long as onDelete hasn't changed
  const handleDelete = useCallback(
    async (userId: string) => {
      await onDelete(userId);
      setUsers((prev) => prev.filter((u) => u.id !== userId));
    },
    [onDelete] // Only re-create if onDelete changes
  );

  return (
    <ul>
      {users.map((user) => (
        <UserRow
          key={user.id}
          user={user}
          onDelete={handleDelete}
        />
      ))}
    </ul>
  );
}

The honest guidance: do not reach for useMemo and useCallback by default. Add them when you can measure a real performance problem or when you have a useEffect that needs a stable function dependency. Premature memoization adds cognitive overhead without measurable benefit in most components.

Custom Hooks: Reusable Logic as Services

Custom hooks extract stateful logic into reusable functions — the closest equivalent to writing a small service class or using a mixin. Any function that calls other hooks is a custom hook; by convention its name starts with use.

// useFetch — equivalent to a generic IDataService<T>
// Encapsulates the loading/error/data pattern that would otherwise be repeated in every component

interface FetchState<T> {
  data: T | null;
  isLoading: boolean;
  error: string | null;
  refetch: () => void;
}

function useFetch<T>(url: string): FetchState<T> {
  const [data, setData] = useState<T | null>(null);
  const [isLoading, setIsLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);
  const [refetchTrigger, setRefetchTrigger] = useState(0);

  useEffect(() => {
    let cancelled = false;

    async function fetchData() {
      setIsLoading(true);
      setError(null);

      try {
        const response = await fetch(url);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        const json: T = await response.json();
        if (!cancelled) setData(json);
      } catch (err) {
        if (!cancelled) {
          setError(err instanceof Error ? err.message : "Request failed");
        }
      } finally {
        if (!cancelled) setIsLoading(false);
      }
    }

    fetchData();
    return () => { cancelled = true; };
  }, [url, refetchTrigger]); // Re-fetch when URL changes or refetch() is called

  const refetch = useCallback(() => {
    setRefetchTrigger((n) => n + 1);
  }, []);

  return { data, isLoading, error, refetch };
}
// A more focused custom hook — useLocalStorage
// Equivalent to a service that wraps a storage mechanism

function useLocalStorage<T>(key: string, initialValue: T): [T, (value: T) => void] {
  const [storedValue, setStoredValue] = useState<T>(() => {
    try {
      const item = window.localStorage.getItem(key);
      return item ? (JSON.parse(item) as T) : initialValue;
    } catch {
      return initialValue;
    }
  });

  function setValue(value: T) {
    try {
      setStoredValue(value);
      window.localStorage.setItem(key, JSON.stringify(value));
    } catch (err) {
      console.error("useLocalStorage write failed:", err);
    }
  }

  return [storedValue, setValue];
}

// Usage
function SettingsPanel() {
  const [theme, setTheme] = useLocalStorage<"light" | "dark">("theme", "light");

  return (
    <button onClick={() => setTheme(theme === "light" ? "dark" : "light")}>
      Switch to {theme === "light" ? "dark" : "light"} mode
    </button>
  );
}
// Consuming the useFetch hook — clean component, no lifecycle boilerplate
function UserProfile({ userId }: { userId: string }) {
  const { data: user, isLoading, error, refetch } = useFetch<User>(`/api/users/${userId}`);

  if (isLoading) return <Spinner />;
  if (error) return <ErrorMessage message={error} onRetry={refetch} />;
  if (!user) return null;

  return (
    <div>
      <h2>{user.name}</h2>
      <p>{user.email}</p>
      <button onClick={refetch}>Refresh</button>
    </div>
  );
}

Custom hooks compose the same way utility classes compose in .NET. A useOrderForm hook might call useFetch, useLocalStorage, and useState internally. The consuming component never sees the implementation details.
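
A sketch of that composition, reusing the hooks defined above (useOrderForm and the Order type are hypothetical):

function useOrderForm(orderId: string) {
  const { data: order, isLoading, error } = useFetch<Order>(`/api/orders/${orderId}`);
  const [draftNotes, setDraftNotes] = useLocalStorage<string>(`order-notes-${orderId}`, "");
  const [isDirty, setIsDirty] = useState(false);

  function updateNotes(notes: string) {
    setDraftNotes(notes);
    setIsDirty(true);
  }

  // The consuming component sees one cohesive API, not the individual hooks
  return { order, isLoading, error, draftNotes, updateNotes, isDirty };
}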

When to Reach for a State Management Library

useState and useContext cover most needs. Reach for an external state management library when:

  • State needs to be shared across many components that are not in a natural parent-child relationship
  • You need time-travel debugging or state snapshots
  • State updates involve complex business logic that benefits from testable reducers
  • You are building an application where global state changes frequently and performance is a concern

The current landscape:

Library | .NET analog | Best for
Zustand | A thread-safe singleton service with events | Simple global state — low boilerplate
Redux Toolkit | A full CQRS/Event Sourcing setup | Complex state machines, enterprise apps
Jotai / Recoil | Observable properties with fine-grained reactivity | Granular subscriptions, avoiding over-rendering
React Query / TanStack Query | A smart HttpClient + IMemoryCache combined | Server state (fetching, caching, syncing)

TanStack Query deserves special mention: if your state management problem is primarily “fetch data, cache it, keep it fresh,” TanStack Query solves it more completely than anything you can build with useEffect. It handles loading states, error states, cache invalidation, background refetching, and deduplication. The useFetch hook in the example above is a simplified version of what TanStack Query provides out of the box.
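
For comparison, the UserProfile from the useFetch example rewritten with TanStack Query might look roughly like this (a sketch, assuming @tanstack/react-query is installed and a QueryClientProvider wraps the app):

import { useQuery } from "@tanstack/react-query";

function UserProfile({ userId }: { userId: string }) {
  const { data: user, isLoading, error, refetch } = useQuery({
    queryKey: ["user", userId], // cache key: a new userId triggers a new fetch
    queryFn: async () => {
      const response = await fetch(`/api/users/${userId}`);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return (await response.json()) as User;
    },
  });

  if (isLoading) return <Spinner />;
  if (error) return <ErrorMessage message={(error as Error).message} onRetry={refetch} />;
  if (!user) return null;

  return <h2>{user.name}</h2>;
}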

Key Differences

| Concept | Blazor (.NET) | React Hooks |
|---|---|---|
| State declaration | private T _field + StateHasChanged() | const [value, setValue] = useState<T>(initial) |
| Initialization (once) | OnInitializedAsync() override | useEffect(() => { ... }, []) |
| Prop change response | OnParametersSetAsync() override | useEffect(() => { ... }, [prop]) |
| Cleanup / disposal | IDisposable.Dispose() | Return function from useEffect callback |
| Service/dependency access | @inject / constructor | useContext(SomeContext) |
| Mutable non-state field | Private field (no special syntax) | useRef (ref.current is mutable) |
| Derived/computed values | C# computed property (get { return ... }) | useMemo(() => compute(), [deps]) |
| Stable callback reference | Func<T> field (allocated once) | useCallback(() => fn(), [deps]) |
| Reusable stateful logic | Service class injected via DI | Custom hook (function prefixed with use) |
| Global shared state | DI container (singleton service) | useContext + Provider, or Zustand/Redux |
| Conditional lifecycle | Override method with condition inside | Hook called unconditionally, condition inside |

Gotchas for .NET Engineers

Gotcha 1: Stale closures — the most common and most confusing React bug.

When a function defined inside a component captures a state value, it captures the value at the time the function was created. If state updates and the function is not re-created, it reads the old value. This is a JavaScript closure, not a React-specific concept — but React’s hook model makes it especially common.

function Counter() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    const interval = setInterval(() => {
      // STALE CLOSURE: count is always 0 here
      // because this function captured count=0 at mount time
      // and the empty dependency array means the effect never re-runs
      console.log("count is:", count); // Always logs 0
      setCount(count + 1);             // Always sets to 0+1=1, not incrementing
    }, 1000);

    return () => clearInterval(interval);
  }, []); // <-- The problem: count is a dependency, but omitted

  return <p>{count}</p>;
}

There are two correct solutions:

// Solution A: Add count to the dependency array
// (re-creates the interval every time count changes — works but not ideal for intervals)
useEffect(() => {
  const interval = setInterval(() => {
    setCount(count + 1); // Now reads current count
  }, 1000);
  return () => clearInterval(interval);
}, [count]);

// Solution B: Use the functional update form — preferred for intervals/counters
// The setter's callback always receives the current state value
useEffect(() => {
  const interval = setInterval(() => {
    setCount((prev) => prev + 1); // prev is always current — no closure issue
  }, 1000);
  return () => clearInterval(interval);
}, []); // Now correctly empty: no dependencies

The rule: if your useEffect uses a value from component scope (props, state, other variables), that value must be in the dependency array unless you are using the functional update form to avoid needing it.

Gotcha 2: Infinite re-render loops from useEffect dependencies.

If you create an object or array inside a component and pass it as a useEffect dependency, you get an infinite loop. The object is re-created on every render, the effect sees a “changed” dependency, runs and updates state, which triggers a render, which creates a new object…

// WRONG — infinite loop
function UserDashboard({ userId }: { userId: string }) {
  const options = { userId, timestamp: Date.now() }; // New object every render

  useEffect(() => {
    fetchDashboard(options); // options changes every render -> infinite loop
  }, [options]); // <- options is referentially new every render
}

// CORRECT — depend on primitives, not objects
function UserDashboard({ userId }: { userId: string }) {
  useEffect(() => {
    fetchDashboard({ userId, timestamp: Date.now() }); // Object created inside effect
  }, [userId]); // Primitive dependency — only re-runs when userId string changes
}

The fix is almost always: depend on primitives (strings, numbers, booleans), move object creation inside the effect, or stabilize the object with useMemo.
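
If the object genuinely has to exist outside the effect (for example, it is also passed to a child component), a minimal sketch of the useMemo option looks like this; fetchDashboard is the same illustrative function as above.

import { useEffect, useMemo } from "react";

function UserDashboard({ userId }: { userId: string }) {
  // Re-created only when userId changes, so the reference stays stable between renders
  const options = useMemo(() => ({ userId }), [userId]);

  useEffect(() => {
    fetchDashboard(options); // safe: options only changes when userId changes
  }, [options]);

  return null;
}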

Gotcha 3: useEffect is not a lifecycle method — it is a synchronization mechanism.

.NET engineers tend to reach for useEffect for any logic that “runs when something happens.” The better mental model is: useEffect synchronizes your component with something external (an API, a subscription, a DOM measurement) whenever its dependencies change. It is not a general event handler.

Specifically: do not use useEffect to synchronize state with other state. If you need to compute B from A, compute it during render, not in an effect.

// WRONG — using useEffect to derive state from state
// Creates a render → effect → setState → render loop
function FullName({ firstName, lastName }: FullNameProps) {
  const [fullName, setFullName] = useState("");

  useEffect(() => {
    setFullName(`${firstName} ${lastName}`);
  }, [firstName, lastName]);

  return <p>{fullName}</p>;
}

// CORRECT — compute during render
function FullName({ firstName, lastName }: FullNameProps) {
  const fullName = `${firstName} ${lastName}`; // Derived value, no hook needed
  return <p>{fullName}</p>;
}

A useful heuristic: if your useEffect contains only setState calls and no async operations, subscriptions, or external interactions, you almost certainly do not need useEffect.

Gotcha 4: State updates are asynchronous and batched — do not read state immediately after setting it.

setState schedules a re-render; it does not mutate the current value immediately. Reading the state variable on the next line gives you the old value.

// WRONG — reads stale value
function Form() {
  const [name, setName] = useState("");

  function handleSubmit() {
    setName("Alice");
    console.log(name); // Logs "" — the state has not updated yet
    submitToApi(name);  // Submits "" — wrong
  }
}

// CORRECT — use the value you set, not the state variable
function Form() {
  const [name, setName] = useState("");

  function handleSubmit() {
    const newName = "Alice";
    setName(newName);
    submitToApi(newName); // Uses the value directly — correct
  }
}

This is not like C#’s INotifyPropertyChanged where a property write is synchronous. The state update is a request to React to re-render with the new value.

Gotcha 5: Missing useEffect cleanup causes memory leaks and race conditions.

If your effect creates a subscription or starts an async operation, it must clean up. Without cleanup, the component can attempt to update state after it has unmounted. In React 17 and earlier this produced the “Can’t perform a React state update on an unmounted component” warning; React 18 dropped the warning, but the wasted work and, in some cases, actual memory leaks remain.

// WRONG — no cleanup
useEffect(() => {
  const subscription = eventBus.subscribe("user-updated", handleUpdate);
  // Component unmounts, but subscription lives on indefinitely
}, []);

// CORRECT — cleanup function returned
useEffect(() => {
  const subscription = eventBus.subscribe("user-updated", handleUpdate);
  return () => {
    subscription.unsubscribe(); // Called on unmount and before next effect run
  };
}, []);

For async operations, use the cancellation flag pattern shown in the useFetch example above, or pass an AbortSignal to fetch. The browser has no CancellationToken, but AbortController is its closest equivalent for fetch-based work; the manual boolean flag covers everything else.
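
A minimal sketch of the AbortController variant, assuming the same illustrative /api/users endpoint. Aborting on cleanup both cancels the network call and prevents the stale setState.

import { useEffect, useState } from "react";

function UserName({ userId }: { userId: string }) {
  const [name, setName] = useState<string | null>(null);

  useEffect(() => {
    const controller = new AbortController();

    fetch(`/api/users/${userId}`, { signal: controller.signal })
      .then((res) => res.json())
      .then((user) => setName(user.name))
      .catch((err) => {
        if (err.name !== "AbortError") console.error(err); // an abort is expected, not an error
      });

    return () => controller.abort(); // like calling CancellationTokenSource.Cancel() in Dispose
  }, [userId]);

  return <p>{name}</p>;
}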

Gotcha 6: useContext re-renders all consumers on any context value change.

If you put multiple unrelated values in one context, any change to any value re-renders every consumer. Split contexts by update domain, or use a state management library for frequently-changing global state.

// WRONG — one context for everything
// A user.name change re-renders every component consuming this context,
// including components that only care about the theme
const AppContext = React.createContext<{
  user: User;
  theme: Theme;
  notifications: Notification[];
  // ...
}>(null!);

// CORRECT — separate contexts by update frequency
const UserContext = React.createContext<User | null>(null);
const ThemeContext = React.createContext<Theme>("light");
const NotificationsContext = React.createContext<Notification[]>([]);

Hands-On Exercise

Build a useDataTable custom hook that encapsulates the full state management for a paginated, sortable data table. The hook should:

  1. Accept a fetchFn: (params: TableParams) => Promise<PagedResult<T>> and an initialParams argument.
  2. Expose: data, isLoading, error, currentPage, totalPages, sortColumn, sortDirection.
  3. Expose actions: setPage(n), setSort(column, direction), refresh().
  4. Use useEffect to fetch data whenever page or sort changes.
  5. Handle the stale-request problem (a slow earlier request should not overwrite results from a faster later request).

Then build a UserTable component that consumes the hook and renders a sortable, paginated table of users from a mock API. Wire a useDebounce custom hook to the search input so that fetch calls are debounced to 300ms.

This exercise forces you to compose multiple hooks, handle cleanup correctly, manage derived state without redundant effects, and separate presentation from data-fetching logic — the same separation you would achieve with a repository pattern in .NET.

Quick Reference

Hook Selection Guide

| Need | Hook |
|---|---|
| Local component state | useState |
| Side effects, data fetching, subscriptions | useEffect |
| Access shared/global data (no prop drilling) | useContext |
| Mutable value without re-render, DOM reference | useRef |
| Memoize expensive computed value | useMemo |
| Stabilize function reference across renders | useCallback |
| Reusable stateful logic | Custom hook (use prefix) |
| Complex state with many sub-values | useReducer (not covered here — equivalent to a Flux reducer) |

useEffect Dependency Array Rules

| Scenario | Array | Example |
|---|---|---|
| Run once on mount | [] | useEffect(() => { init(); }, []) |
| Run on mount + when x changes | [x] | useEffect(() => { fetch(x); }, [x]) |
| Run after every render | (omit array) | useEffect(() => { log(); }) |
| Cleanup on unmount | Return function from [] effect | return () => { cleanup(); } |

Stale Closure Checklist

When a useEffect or event handler is reading a stale value, check:

  1. Is the value used inside the effect listed in the dependency array?
  2. If the value is a function or object, is it stabilized with useCallback/useMemo?
  3. Can you use the functional update form (setState(prev => ...)) to avoid needing the current value as a dependency?

.NET to React Hooks Map

| Blazor / .NET | React Hook | Notes |
|---|---|---|
| private T _field + StateHasChanged() | useState<T> | Setter triggers re-render |
| OnInitializedAsync() | useEffect(() => {}, []) | Empty array = once on mount |
| OnParametersSetAsync() | useEffect(() => {}, [prop]) | Runs when prop changes |
| OnAfterRender() | useEffect(() => {}) | No array = after every render |
| IDisposable.Dispose() | return () => {} inside useEffect | Returned cleanup function |
| @inject IService Service | useContext(ServiceContext) | Requires Provider ancestor |
| Private mutable field (not UI state) | useRef | .current mutation safe |
| Computed property (get { return ... }) | Inline computation or useMemo | Prefer inline; use useMemo for expensive ops |
| Helper/utility class | Custom hook | Compose hooks, prefix with use |
| Singleton service | Context + Provider | Or Zustand for complex cases |
| CancellationToken | Boolean flag in useEffect closure | let cancelled = false pattern |
| Task.WhenAll | Promise.all([...]) | Inside useEffect async function |

Common Hook Errors and Fixes

| Error | Cause | Fix |
|---|---|---|
| UI doesn’t update after state change | Mutated state directly | Use spread: setState({...prev, field: value}) |
| useEffect runs on every render | Object/array dependency re-created each render | Depend on primitives; move object inside effect |
| Stale value in interval/callback | Missing dependency in array | Add dependency or use functional update form |
| State update after unmount warning | No cleanup on async effect | Use cancellation flag, return cleanup function |
| Infinite render loop | Effect updates state that is also a dependency | Derive value during render instead of using effect |
| Hook called conditionally | Hook inside if or loop | Move hook to top level, use condition inside |

Further Reading

Vue 3 Composition API for .NET Engineers

For .NET engineers who know: C#, Blazor or Razor Pages, WPF data binding, MVVM
You’ll learn: How Vue 3’s Composition API maps to the reactive, component-driven patterns you already use in Blazor and WPF, and how to write typed, testable Vue components with TypeScript
Time: 15-20 min read


The .NET Way (What You Already Know)

In WPF and Blazor, the reactive UI story is built around two core ideas: observable state and declarative markup. In WPF, you implement INotifyPropertyChanged to make a property observable, bind it to a XAML element, and the framework updates the view when the property changes. In Blazor, you call StateHasChanged() or let Blazor’s component lifecycle manage re-renders automatically.

A Blazor component binds logic and markup in the same file with @code {} blocks:

<!-- Blazor: Counter.razor -->
@page "/counter"

<h1>Count: @count</h1>
<button @onclick="Increment">Increment</button>

@code {
    private int count = 0;

    private void Increment()
    {
        count++;
        // Blazor triggers re-render automatically after event handlers
    }
}

In WPF with MVVM, you separate the ViewModel from the View, but the pattern is conceptually the same: a property notifies the UI when it changes, the UI re-renders that specific region:

// WPF ViewModel
public class CounterViewModel : INotifyPropertyChanged
{
    private int _count;
    public int Count
    {
        get => _count;
        set { _count = value; OnPropertyChanged(); }
    }

    public ICommand IncrementCommand => new RelayCommand(() => Count++);

    public event PropertyChangedEventHandler PropertyChanged;
    protected void OnPropertyChanged([CallerMemberName] string name = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

The key patterns you already know:

  • Observable state — a property that triggers UI updates when it changes
  • Computed/derived values — properties derived from other state, like FullName = FirstName + " " + LastName
  • Event handlers — methods that respond to user input
  • Lifecycle hooks — OnInitialized, OnParametersSet, Dispose
  • Declarative markup — XAML or Razor syntax that describes what to render, not how

Vue 3’s Composition API maps directly onto every one of these patterns.


The Vue 3 Way

Single File Components (SFC)

Vue’s equivalent of a .razor file is a .vue Single File Component. It combines template (markup), script (logic), and styles in one file — just like Blazor combines Razor markup and @code {} in one file.

Counter.vue
├── <template>   ← like @Page + HTML markup in .razor
├── <script setup>  ← like @code {} in Blazor
└── <style scoped>  ← like component-scoped CSS

The modern way to write a Vue SFC uses <script setup> — a compile-time macro that eliminates boilerplate. Here is the exact counter from Blazor, written in Vue:

<!-- Counter.vue -->
<script setup lang="ts">
import { ref } from 'vue'

const count = ref(0)

function increment() {
  count.value++
}
</script>

<template>
  <h1>Count: {{ count }}</h1>
  <button @click="increment">Increment</button>
</template>

<style scoped>
button { padding: 0.5rem 1rem; }
</style>

This is a complete, runnable component. Notice:

  • ref(0) creates a reactive number — equivalent to a Blazor field with StateHasChanged() wired up
  • In the <template>, you access count directly (Vue unwraps the ref). In <script>, you use count.value
  • @click is Vue’s event binding — the same as Blazor’s @onclick
  • {{ count }} is interpolation — the same as @count in Razor

ref() and reactive(): Observable State

Vue has two primitives for reactive state: ref() and reactive().

ref() wraps a single value (primitive or object). Think of it as a box with a .value property that Vue watches for changes:

import { ref } from 'vue'

// Primitives
const name = ref<string>('')        // like: private string _name = "";
const age = ref<number>(0)          // like: private int _age = 0;
const isActive = ref<boolean>(false) // like: private bool _isActive = false;

// Objects — ref wraps the whole object
const user = ref<{ id: number; name: string } | null>(null)

// Reading: use .value in <script>
console.log(name.value) // ''

// Writing: assign to .value
name.value = 'Alice'
// Vue automatically re-renders any template that uses {{ name }}

reactive() works on objects only and makes every property individually observable — closer to how WPF’s INotifyPropertyChanged works on a class:

import { reactive } from 'vue'

const form = reactive({
  firstName: '',
  lastName: '',
  email: ''
})

// Access directly — no .value needed
form.firstName = 'Alice'
console.log(form.firstName) // 'Alice'

The practical rule: use ref() for most things (primitives, API results, flags). Use reactive() when you have a cohesive object like a form model and want to avoid .value everywhere.

Concept mapping:

| WPF / Blazor | Vue 3 |
|---|---|
| INotifyPropertyChanged property | ref() |
| Observable class (Fody, MVVM Toolkit) | reactive() |
| StateHasChanged() | Called automatically — no manual trigger |
| @bind-Value (Blazor) | v-model |

computed(): Derived/Calculated Properties

In C#, you write calculated properties with a getter:

public string FullName => $"{FirstName} {LastName}";
public bool IsFormValid => !string.IsNullOrWhiteSpace(Email) && Email.Contains('@');

In Vue, computed() does exactly the same thing. It re-evaluates only when its dependencies change (Vue tracks which refs you read inside the function):

import { ref, computed } from 'vue'

const firstName = ref('Alice')
const lastName = ref('Smith')

// Cached. Only re-evaluates when firstName or lastName changes.
const fullName = computed(() => `${firstName.value} ${lastName.value}`)

// In template: {{ fullName }} — no .value needed for computed in templates

Writable computed (like a C# property with a set):

const _count = ref(0)

const count = computed({
  get: () => _count.value,
  set: (val: number) => { _count.value = Math.max(0, val) } // enforce minimum
})

// Now you can do: count.value = -5 → clamped to 0

watch() and watchEffect(): Reacting to State Changes

In Blazor, OnParametersSet and OnAfterRender let you react to state changes. In WPF, PropertyChanged fires callbacks. Vue’s equivalents are watch() and watchEffect().

watch() watches a specific ref and fires a callback when it changes — equivalent to PropertyChanged on a specific property:

import { ref, watch } from 'vue'

const searchQuery = ref('')
const results = ref<string[]>([])

// Fires when searchQuery changes. Receives new and old values.
watch(searchQuery, async (newQuery, oldQuery) => {
  if (newQuery.length < 2) {
    results.value = []
    return
  }
  results.value = await fetchResults(newQuery)
})

// Watch with options controlling when the callback runs
watch(searchQuery, handler, {
  immediate: true,  // run immediately on mount, like OnInitialized
  deep: true,       // watch nested object properties
})

// Watch multiple sources at once
watch([firstName, lastName], ([newFirst, newLast]) => {
  console.log(`Name changed to ${newFirst} ${newLast}`)
})

watchEffect() is more automatic — it runs immediately and re-runs whenever any reactive value it reads changes. You do not declare what to watch; Vue infers it:

import { ref, watchEffect } from 'vue'

const userId = ref(1)
const userData = ref<User | null>(null)

// Runs on mount and whenever userId.value changes
watchEffect(async () => {
  userData.value = await fetchUser(userId.value) // Vue sees userId.value being read
})

Think of watchEffect as a self-registering observer: it subscribes itself to every reactive value it touches during execution.
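
Like React’s useEffect, watchers can clean up after themselves: the callback receives an onCleanup registrar that runs before the next execution and on unmount. A minimal sketch, reusing the illustrative userId, userData, and User from the example above:

import { ref, watchEffect } from 'vue'

const userId = ref(1)
const userData = ref<User | null>(null)

watchEffect(async (onCleanup) => {
  const controller = new AbortController()
  onCleanup(() => controller.abort()) // register before the first await so it is always called

  const res = await fetch(`/api/users/${userId.value}`, { signal: controller.signal })
  userData.value = await res.json()
})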

Lifecycle Hooks

Blazor’s component lifecycle maps cleanly to Vue’s hooks:

import { onMounted, onUpdated, onUnmounted, onBeforeMount } from 'vue'

// onMounted = Blazor's OnAfterRenderAsync(firstRender: true) = WPF's Loaded
onMounted(async () => {
  userData.value = await fetchUser(userId.value)
})

// onUnmounted = IDisposable.Dispose() / OnDetachedFromVisualTree
onUnmounted(() => {
  clearInterval(timer)
  subscription.unsubscribe()
})

// onUpdated — runs after every DOM update (rare, use with caution)
onUpdated(() => {
  // Equivalent to OnAfterRenderAsync(firstRender: false)
})

All lifecycle hooks must be called during <script setup> execution (not inside a callback or conditional), just as Blazor lifecycle methods are defined at the component class level.

Lifecycle mapping:

| Blazor | WPF | Vue 3 |
|---|---|---|
| OnInitializedAsync | Constructor | setup (the <script setup> block itself) |
| OnAfterRenderAsync(true) | Loaded | onMounted |
| OnParametersSet | OnPropertyChanged | watch on props |
| OnAfterRenderAsync(false) | LayoutUpdated | onUpdated |
| IDisposable.Dispose | Unloaded | onUnmounted |

Template Syntax: Directives

Vue’s template directives map directly to Razor tag helpers and XAML attributes:

<script setup lang="ts">
import { ref, computed } from 'vue'

const isLoggedIn = ref(true)
const userRole = ref<'admin' | 'user'>('user')
const items = ref(['Alpha', 'Beta', 'Gamma'])
const inputValue = ref('')
const cssClass = ref('active')
const imageUrl = ref('/logo.png')
</script>

<template>
  <!-- v-if / v-else-if / v-else = @if / else in Razor, Visibility in WPF -->
  <div v-if="isLoggedIn && userRole === 'admin'">Admin panel</div>
  <div v-else-if="isLoggedIn">User dashboard</div>
  <div v-else>Please log in</div>

  <!-- v-for = @foreach in Razor, ItemsSource in WPF -->
  <!-- :key is required — like React's key prop. Helps Vue track list items. -->
  <ul>
    <li v-for="(item, index) in items" :key="item">
      {{ index + 1 }}. {{ item }}
    </li>
  </ul>

  <!-- v-model = two-way binding. @bind-Value in Blazor, {Binding Mode=TwoWay} in WPF -->
  <input v-model="inputValue" type="text" />
  <p>You typed: {{ inputValue }}</p>

  <!-- v-bind (shorthand: colon) = binding an attribute to an expression -->
  <!-- : means "this is an expression, not a string literal" -->
  <div :class="cssClass">Styled div</div>
  <img :src="imageUrl" :alt="'Logo'" />

  <!-- v-on (shorthand: @) = event binding. @onclick in Blazor -->
  <button @click="() => items.push('Delta')">Add item</button>
  <input @keyup.enter="() => console.log('Enter pressed')" />

  <!-- v-show = visibility toggle. Does NOT remove from DOM (like WPF Visibility.Hidden) -->
  <!-- v-if REMOVES the element. v-show just hides it with display:none -->
  <p v-show="isLoggedIn">Visible but always rendered</p>
</template>

Props and Emits: Component Communication

In Blazor, a component receives data via [Parameter] attributes and communicates back via EventCallback<T>. In Vue, these are defineProps and defineEmits.

<!-- UserCard.vue — child component -->
<script setup lang="ts">
// defineProps: equivalent to [Parameter] in Blazor
const props = defineProps<{
  userId: number
  displayName: string
  isEditable?: boolean  // optional — like [Parameter] with a default
}>()

// defineEmits: equivalent to EventCallback<T> in Blazor
const emit = defineEmits<{
  'user-deleted': [id: number]
  'display-name-changed': [id: number, newName: string]
}>()

function handleDelete() {
  // Equivalent to: await OnDeleted.InvokeAsync(props.userId)
  emit('user-deleted', props.userId)
}

function handleRename(newName: string) {
  emit('display-name-changed', props.userId, newName)
}
</script>

<template>
  <div class="user-card">
    <h3>{{ displayName }}</h3>
    <button v-if="isEditable" @click="handleDelete">Delete</button>
  </div>
</template>

Consuming that component from a parent:

<!-- ParentPage.vue -->
<script setup lang="ts">
import UserCard from './UserCard.vue'
import { ref } from 'vue'

const users = ref([
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
])

function onUserDeleted(id: number) {
  users.value = users.value.filter(u => u.id !== id)
}

function onNameChanged(id: number, newName: string) {
  const user = users.value.find(u => u.id === id)
  if (user) user.name = newName
}
</script>

<template>
  <UserCard
    v-for="user in users"
    :key="user.id"
    :userId="user.id"
    :displayName="user.name"
    :isEditable="true"
    @user-deleted="onUserDeleted"
    @display-name-changed="onNameChanged"
  />
</template>

Props with defaults use withDefaults:

const props = withDefaults(defineProps<{
  pageSize?: number
  sortOrder?: 'asc' | 'desc'
}>(), {
  pageSize: 20,
  sortOrder: 'asc'
})

Side-by-Side: Vue 3 vs React

This table and example show the same component written in both frameworks. If you have already read the React article, this will orient you quickly.

// React — SearchBox.tsx
import { useState, useEffect, useMemo } from 'react'

interface Props {
  placeholder?: string
  onSearch: (query: string) => void
}

export function SearchBox({ placeholder = 'Search...', onSearch }: Props) {
  const [query, setQuery] = useState('')
  const [results, setResults] = useState<string[]>([])

  const trimmedQuery = useMemo(() => query.trim(), [query])

  useEffect(() => {
    if (!trimmedQuery) { setResults([]); return }
    fetchResults(trimmedQuery).then(setResults)
  }, [trimmedQuery])

  return (
    <div>
      <input
        value={query}
        onChange={e => setQuery(e.target.value)}
        placeholder={placeholder}
      />
      <ul>
        {results.map(r => <li key={r}>{r}</li>)}
      </ul>
    </div>
  )
}
<!-- Vue 3 — SearchBox.vue -->
<script setup lang="ts">
import { ref, computed, watch } from 'vue'

const props = withDefaults(defineProps<{
  placeholder?: string
}>(), { placeholder: 'Search...' })

const emit = defineEmits<{ search: [query: string] }>()

const query = ref('')
const results = ref<string[]>([])

const trimmedQuery = computed(() => query.value.trim())

watch(trimmedQuery, async (newQuery) => {
  if (!newQuery) { results.value = []; return }
  results.value = await fetchResults(newQuery)
})
</script>

<template>
  <div>
    <input v-model="query" :placeholder="placeholder" />
    <ul>
      <li v-for="result in results" :key="result">{{ result }}</li>
    </ul>
  </div>
</template>

Key differences:

| Feature | React | Vue 3 |
|---|---|---|
| State | useState hook | ref() / reactive() |
| Derived state | useMemo | computed() |
| Side effects | useEffect with deps array | watch() / watchEffect() |
| Two-way binding | Controlled input (value + onChange) | v-model |
| Template | JSX (TypeScript in markup) | <template> (HTML-like) |
| Event syntax | onClick={handler} | @click="handler" |
| Prop binding | <Comp value={expr} /> | <Comp :value="expr" /> |
| Conditional | {condition && <El />} | v-if="condition" |
| Lists | .map() with JSX | v-for directive |

Vue’s template syntax feels closer to Razor. React’s JSX feels more like C# with embedded markup. Neither is strictly better — they reflect different priorities.


Key Differences

The <script setup> Compilation Model

<script setup> is not just syntax sugar. The Vue compiler transforms it at build time. Everything declared at the top level of <script setup> is automatically available in <template>. There is no explicit return {} or export default { setup() {} }. This is a compile-time feature, not runtime behavior.

Reactivity Is Proxy-Based

Vue 3 uses JavaScript Proxy objects under the hood. When you access reactive() object properties or .value on a ref(), Vue intercepts those reads and tracks which components depend on them. When you write, Vue knows exactly which components to re-render. You never call StateHasChanged() or invoke a command pattern — the tracking is automatic.

Two-Way Binding Is First-Class

React deliberately removed two-way binding because it creates implicit data flow that is hard to trace. Vue kept it because the ergonomics are good for form-heavy UIs. v-model on an input is equivalent to :value="x" @input="x = $event.target.value". On a component, v-model passes a modelValue prop and listens for an update:modelValue event.
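
A minimal sketch of that contract on a custom component (LabeledInput is an illustrative name; modelValue and update:modelValue are the Vue conventions):

<!-- LabeledInput.vue: a child component that supports v-model -->
<script setup lang="ts">
defineProps<{ modelValue: string; label: string }>()
const emit = defineEmits<{ 'update:modelValue': [value: string] }>()

function onInput(event: Event) {
  emit('update:modelValue', (event.target as HTMLInputElement).value)
}
</script>

<template>
  <label>
    {{ label }}
    <input :value="modelValue" @input="onInput" />
  </label>
</template>

<!-- Parent usage: <LabeledInput v-model="email" label="Email" /> -->

On Vue 3.4 or later, the defineModel() macro generates this prop/event pair for you; the underlying contract is the same.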

Styles Are Scoped With <style scoped>

Adding <style scoped> makes all CSS rules apply only to elements in that component’s template. Vue injects a unique data attribute (e.g., data-v-f3f3eg9) and rewrites CSS selectors to match. This is equivalent to CSS Modules but works without any configuration.


Gotchas for .NET Engineers

Gotcha 1: .value inside script, none in template

The single most common mistake when learning Vue. In <script setup>, every ref() variable requires .value to read or write:

const count = ref(0)
count.value++        // correct in <script>
count++              // wrong — count is the ref object (and a const), not the number

But in <template>, Vue automatically unwraps refs:

<!-- Correct in template -->
{{ count }}

<!-- Wrong — count is already unwrapped here, so .value is undefined -->
{{ count.value }}

If you see [object Object] in your template where you expect a number, you are probably interpolating a whole object, or a ref nested inside a plain object or array that the template does not auto-unwrap.

Gotcha 2: Destructuring a reactive() object breaks reactivity

This trips up C# engineers who think of var { firstName, lastName } = form as harmless:

const form = reactive({ firstName: 'Alice', lastName: 'Smith' })

// WRONG — these are plain strings now, not reactive
const { firstName, lastName } = form

// CORRECT — use toRefs() to preserve reactivity when destructuring
import { toRefs } from 'vue'
const { firstName, lastName } = toRefs(form)
// firstName.value and lastName.value are now reactive refs

The same problem does not occur with ref() objects because you read through .value, which Vue intercepts at the proxy level.

Gotcha 3: watch does not run immediately by default

In Blazor, OnParametersSet runs every time parameters change — including the first render. Vue’s watch does not run on initialization:

// This will NOT fetch data on component mount
watch(userId, async (id) => {
  userData.value = await fetchUser(id)
})

// CORRECT — add { immediate: true } to replicate OnInitialized + OnParametersSet
watch(userId, async (id) => {
  userData.value = await fetchUser(id)
}, { immediate: true })

// Alternatively, use onMounted for initial fetch and watch for changes
onMounted(() => fetchUser(userId.value))
watch(userId, (id) => fetchUser(id))

Gotcha 4: v-if vs v-show — DOM removal vs CSS hide

v-if="false" removes the element from the DOM entirely. Child components are destroyed; their lifecycle cleanup (onUnmounted) runs. v-show="false" sets display: none — the component stays alive.

<!-- v-if: like conditional rendering in Blazor — component is created/destroyed -->
<HeavyChart v-if="isChartVisible" />

<!-- v-show: like WPF Visibility.Hidden — always rendered, just hidden -->
<!-- Use for elements that toggle frequently and are expensive to mount -->
<TabPanel v-show="activeTab === 'chart'" />

If you toggle something frequently and it has expensive initialization (like a chart with a WebSocket connection), use v-show. Otherwise, use v-if.

Gotcha 5: TypeScript generics in .vue files need a workaround in some setups

When using defineProps<T>() with a generic type parameter imported from another file, you may hit compiler limitations. The type must be defined in the same file or be a simple inline type. This is a Vue/Volar tooling limitation, not a TypeScript limitation:

// This may fail in some tooling versions
import type { UserProps } from './types'
const props = defineProps<UserProps>() // can cause "type argument must be a literal type" error

// Workaround: define inline or re-declare in the same file
interface UserProps {
  userId: number
  displayName: string
}
const props = defineProps<UserProps>() // works

Check that @vue/language-tools (Volar) is up to date before fighting this.

Gotcha 6: reactive() loses reactivity when replaced wholesale

You cannot reassign a reactive() object. You can only mutate its properties:

const form = reactive({ name: '', email: '' })

// WRONG — throws at runtime because form is a const; even with let, the template would keep tracking the old object
form = { name: 'Alice', email: 'alice@example.com' }

// CORRECT — mutate properties individually
form.name = 'Alice'
form.email = 'alice@example.com'

// CORRECT — or use Object.assign for bulk update
Object.assign(form, { name: 'Alice', email: 'alice@example.com' })

Hands-On Exercise

Build a typed contact search component that demonstrates all the concepts in this article.

Requirements:

  • A ContactSearch.vue component that accepts a title prop (string, required) and a maxResults prop (number, optional, default 10)
  • A search input with v-model bound to a ref<string>
  • A computed() property that filters a hardcoded list of contacts by name (case-insensitive, trimmed)
  • A watch on the search query that logs to the console when the query exceeds 50 characters (a “query too long” warning)
  • onMounted that logs “ContactSearch mounted”
  • onUnmounted that logs “ContactSearch unmounted”
  • Display results with v-for and :key
  • Show a “No results” message with v-if when the filtered list is empty
  • Emit a contact-selected event (with the contact name as payload) when a result is clicked
  • Full TypeScript — no any

Starter scaffold:

<!-- ContactSearch.vue -->
<script setup lang="ts">
import { ref, computed, watch, onMounted, onUnmounted } from 'vue'

const CONTACTS = [
  'Alice Martin', 'Bob Chen', 'Carol White', 'David Kim',
  'Eve Johnson', 'Frank Lee', 'Grace Park', 'Henry Brown'
]

// TODO: defineProps with title (required string) and maxResults (optional number)
// TODO: defineEmits with 'contact-selected' event
// TODO: ref for search query
// TODO: computed for filtered contacts (respect maxResults)
// TODO: watch for query length > 50
// TODO: onMounted / onUnmounted lifecycle hooks
// TODO: selectContact function that emits the event
</script>

<template>
  <!-- TODO: render title, input, results list, empty state -->
</template>

Expected output when complete:

A working search box that filters contacts as you type, shows “No results” when nothing matches, and fires a typed event when a result is clicked. Parent components can listen with @contact-selected="handleSelection".


Quick Reference

| Concept | Vue 3 Composition API | Blazor Equivalent | WPF Equivalent |
|---|---|---|---|
| Reactive primitive | const x = ref(0) | private int x = 0 (auto re-render) | INotifyPropertyChanged property |
| Read reactive value (script) | x.value | x | X (property getter) |
| Write reactive value | x.value = 1 | x = 1; StateHasChanged() | X = 1 (fires INPC) |
| Reactive object | reactive({ a: 1 }) | class with fields | Full ViewModel class |
| Computed/derived | computed(() => ...) | => expr (C# getter) | Computed property |
| Watch specific value | watch(x, (n, o) => ...) | OnParametersSet / PropertyChanged | DependencyProperty.Changed |
| Watch any dependency | watchEffect(() => ...) | — | MultiBinding |
| Mount hook | onMounted(() => ...) | OnAfterRenderAsync(true) | Loaded |
| Unmount/cleanup hook | onUnmounted(() => ...) | IDisposable.Dispose | Unloaded |
| Receive data | defineProps<T>() | [Parameter] | DependencyProperty |
| Emit event | defineEmits<E>() then emit(...) | EventCallback<T>.InvokeAsync | RoutedEvent / delegates |
| Two-way binding | v-model | @bind-Value | {Binding Mode=TwoWay} |
| Conditional render | v-if / v-else | @if / else | DataTrigger / Visibility |
| List render | v-for="x in list" :key="x.id" | @foreach | ItemsSource |
| Attribute binding | :attr="expr" | attr="@expr" | {Binding} |
| Event binding | @click="handler" | @onclick="handler" | Button.Click |
| Scoped CSS | <style scoped> | CSS Isolation (.razor.css) | Control template styles |
| Import component | import Comp from './Comp.vue' | @using namespace | xmlns declaration |

Further Reading

Next.js: The ASP.NET MVC of React

For .NET engineers who know: ASP.NET Core MVC, Razor Pages, Minimal APIs, ASP.NET Core middleware
You’ll learn: How Next.js brings server-side rendering, file-based routing, API endpoints, and middleware to React — mapping each concept to the ASP.NET Core feature you already know
Time: 15-20 min read


The .NET Way (What You Already Know)

ASP.NET Core MVC is a convention-driven framework. Drop a file in the Controllers/ folder, inherit from Controller, name a method correctly, and the framework routes requests to it. Drop a .cshtml file in Views/, follow the naming convention, and the framework knows how to render it. The framework inspects your project structure and wires things up.

Razor Pages sharpens this further. A Pages/Products/Index.cshtml file with a corresponding Index.cshtml.cs page model handles GET /products automatically. The file path is the route. There is no separate routing configuration to maintain.

ASP.NET Core’s pipeline is middleware-based. You register middleware in Program.cs, and every request passes through each piece in order — authentication, CORS, routing, static files, and finally your application logic. Middleware can short-circuit the pipeline by not calling next().

// Program.cs — ASP.NET Core pipeline
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();

app.UseHttpsRedirection();     // middleware
app.UseStaticFiles();           // middleware
app.UseAuthentication();        // middleware
app.UseAuthorization();         // middleware
app.MapControllers();           // routing

app.Run();
// Razor Pages file structure → URL mapping
Pages/
  Index.cshtml          → GET /
  Products/
    Index.cshtml        → GET /products
    Details.cshtml      → GET /products/details
    Details.cshtml with @page "{id}" → GET /products/details/5  (dynamic segment)
  Account/
    Login.cshtml        → GET /account/login

Next.js is built on exactly these mental models, applied to React.


The Next.js Way

What Next.js Adds to React

Plain React is a UI library — it renders components but provides nothing for routing, server rendering, data fetching, or API endpoints. Every React application needs a framework to supply these things. Next.js is the dominant choice.

| Capability | Plain React | Next.js |
|---|---|---|
| Routing | None (add React Router) | Built-in (file-based) |
| Server-side rendering | None | Built-in |
| API endpoints | None (need a separate server) | Built-in (route.ts) |
| Data fetching | Manual (useEffect + fetch) | Structured patterns (async/await in components) |
| Build optimization | Manual (configure webpack) | Automatic |
| Middleware | None | Built-in (middleware.ts) |
| Code splitting | Manual | Automatic |

Think of it exactly like the difference between a class library and ASP.NET Core: the library gives you building blocks, the framework gives you a working application.

Project Setup

npx create-next-app@latest my-app --typescript --tailwind --app
cd my-app
npm run dev   # starts on http://localhost:3000

The --app flag selects the App Router (Next.js 13+). This is the current architecture. The older Pages Router still works but is in maintenance mode — equivalent to Web Forms vs. MVC.

App Router: File-Based Routing

The App Router lives in the app/ directory. The file structure is the route structure.

app/
  layout.tsx            → root layout (wraps every page)
  page.tsx              → GET /
  loading.tsx           → loading UI for /
  error.tsx             → error boundary for /
  not-found.tsx         → 404 for /
  products/
    page.tsx            → GET /products
    layout.tsx          → layout for /products/* (nested layout)
    loading.tsx         → loading UI for /products
    [id]/
      page.tsx          → GET /products/123
      edit/
        page.tsx        → GET /products/123/edit
  api/
    users/
      route.ts          → GET/POST /api/users (API endpoint)
    users/[id]/
      route.ts          → GET/PUT/DELETE /api/users/123

This maps directly to how Razor Pages works. Compare them:

// Razor Pages                   → Next.js App Router
Pages/Index.cshtml               → app/page.tsx
Pages/Products/Index.cshtml      → app/products/page.tsx
Pages/Products/Details.cshtml    → app/products/[id]/page.tsx
_Layout.cshtml (shared layout)   → app/layout.tsx
Partial views                    → Shared components in components/

Route segments in square brackets are dynamic — identical to the {id} route template syntax in ASP.NET:

// app/products/[id]/page.tsx
// Handles: /products/1, /products/abc, /products/any-slug

interface PageProps {
  params: { id: string }
  searchParams: { [key: string]: string | string[] | undefined }
}

export default function ProductPage({ params, searchParams }: PageProps) {
  // params.id is the dynamic segment: '123'
  // searchParams.sort is ?sort=price
  return <h1>Product {params.id}</h1>
}

Catch-all routes use [...slug] — equivalent to ASP.NET’s {*path} wildcard:

app/docs/[...slug]/page.tsx  → /docs/intro, /docs/api/users, /docs/a/b/c

layout.tsx: The _Layout.cshtml Equivalent

Every directory can have a layout.tsx that wraps all pages beneath it. Layouts are nested — a child directory’s layout wraps inside the parent layout. This is more composable than ASP.NET’s single _Layout.cshtml:

// app/layout.tsx — root layout, wraps every page
import type { Metadata } from 'next'

export const metadata: Metadata = {
  title: 'My App',
  description: 'Built with Next.js',
}

export default function RootLayout({
  children,
}: {
  children: React.ReactNode
}) {
  return (
    <html lang="en">
      <body>
        <nav>Global navigation</nav>
        <main>{children}</main>  {/* ← equivalent to @RenderBody() */}
        <footer>Footer</footer>
      </body>
    </html>
  )
}
// app/dashboard/layout.tsx — nested layout for /dashboard/*
export default function DashboardLayout({ children }: { children: React.ReactNode }) {
  return (
    <div className="dashboard-container">
      <aside>Sidebar</aside>
      <section>{children}</section>
    </div>
  )
}

A request to /dashboard/settings renders: RootLayout > DashboardLayout > SettingsPage. ASP.NET’s nested layouts are possible but require explicit @RenderSection plumbing. Next.js handles the nesting automatically.

loading.tsx and error.tsx: Built-In UI States

Next.js has dedicated file conventions for loading and error states — concepts that require manual work in ASP.NET:

// app/products/loading.tsx
// Shown automatically while the page.tsx component is loading
export default function Loading() {
  return <div className="skeleton-loader">Loading products...</div>
}
// app/products/error.tsx
// Shown when any error is thrown in page.tsx or its children
// Must be a Client Component (error boundaries require client-side React)
'use client'

interface ErrorProps {
  error: Error & { digest?: string }
  reset: () => void
}

export default function ProductsError({ error, reset }: ErrorProps) {
  return (
    <div>
      <h2>Failed to load products</h2>
      <p>{error.message}</p>
      <button onClick={reset}>Try again</button>
    </div>
  )
}

This is equivalent to try/catch in a Razor Page’s OnGetAsync combined with a partial view for the error state — but declarative and automatic.

Server Components vs. Client Components: The Key Architectural Decision

This is the concept that most surprises .NET engineers, and the one that matters most for performance.

By default, every component in app/ is a Server Component. It runs only on the server — during the build (for static pages) or on each request (for dynamic pages). It can do things a browser component cannot: read files, query databases directly, use secret environment variables, make server-to-server API calls.

A Client Component runs in the browser. It can use useState, useEffect, event handlers, browser APIs (window, localStorage), and third-party components that need the DOM. To mark a component as a client component, add 'use client' at the top of the file.

// app/products/page.tsx — Server Component (no 'use client' = server by default)
// This runs on the server. The browser never downloads this code.
// Think of it like a Razor Page's OnGetAsync — runs server-side, returns rendered HTML.

async function getProducts() {
  // Direct database query here is fine — this code never runs in the browser
  const res = await fetch('https://api.example.com/products', {
    next: { revalidate: 60 } // cache for 60 seconds — explained below
  })
  return res.json()
}

export default async function ProductsPage() {
  const products = await getProducts() // await works directly in Server Components

  return (
    <div>
      <h1>Products</h1>
      <ul>
        {products.map((p: { id: number; name: string; price: number }) => (
          <li key={p.id}>{p.name} — ${p.price}</li>
        ))}
      </ul>
    </div>
  )
}
// components/AddToCartButton.tsx — Client Component
// This needs an onClick handler, so it must run in the browser
'use client'

import { useState } from 'react'

interface Props {
  productId: number
}

export function AddToCartButton({ productId }: Props) {
  const [added, setAdded] = useState(false)

  async function handleAdd() {
    await fetch('/api/cart', {
      method: 'POST',
      body: JSON.stringify({ productId }),
    })
    setAdded(true)
  }

  return (
    <button onClick={handleAdd} disabled={added}>
      {added ? 'Added' : 'Add to Cart'}
    </button>
  )
}

A Server Component can import and render a Client Component. A Client Component cannot import a Server Component (the server code would ship to the browser). This creates a tree: Server Components form the “shell,” Client Components are the interactive islands.

Mental model for .NET engineers: Server Components are like Razor Pages markup — they render HTML on the server and never ship logic to the browser. Client Components are like Blazor WebAssembly — they download to the browser and run there. The difference is that in Next.js, you mix them in the same tree.
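
A minimal sketch of that composition: the product detail page stays a Server Component and renders the AddToCartButton island from the earlier example. getProductById and its import path are illustrative, and the params shape follows the Next.js 14 convention used elsewhere in this chapter.

// app/products/[id]/page.tsx: Server Component shell rendering a Client Component island
import { AddToCartButton } from '@/components/AddToCartButton'
import { getProductById } from '@/lib/product-service'

export default async function ProductDetailPage({ params }: { params: { id: string } }) {
  const product = await getProductById(Number(params.id)) // server-only data access

  return (
    <article>
      <h1>{product.name}</h1>
      <p>${product.price}</p>
      {/* Only this button's JavaScript is sent to the browser */}
      <AddToCartButton productId={product.id} />
    </article>
  )
}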

API Routes: route.ts as Minimal APIs

The route.ts file convention creates HTTP endpoints — equivalent to ASP.NET Minimal API handlers or [ApiController] methods.

// app/api/products/route.ts
// Handles GET /api/products and POST /api/products

import { NextRequest, NextResponse } from 'next/server'

// Named exports for each HTTP method — like [HttpGet] / [HttpPost] attributes
export async function GET(request: NextRequest) {
  const { searchParams } = new URL(request.url)
  const category = searchParams.get('category') // ?category=electronics

  const products = await fetchProductsFromDb(category)

  return NextResponse.json(products) // equivalent to return Ok(products)
}

export async function POST(request: NextRequest) {
  const body = await request.json()

  // Validate
  if (!body.name || !body.price) {
    return NextResponse.json(
      { error: 'name and price are required' },
      { status: 400 } // equivalent to return BadRequest(...)
    )
  }

  const product = await createProduct(body)
  return NextResponse.json(product, { status: 201 }) // return Created(...)
}
// app/api/products/[id]/route.ts
// Handles GET/PUT/DELETE /api/products/:id

import { NextRequest, NextResponse } from 'next/server'

interface RouteContext {
  params: { id: string }
}

export async function GET(request: NextRequest, { params }: RouteContext) {
  const product = await getProductById(Number(params.id))

  if (!product) {
    return NextResponse.json({ error: 'Not found' }, { status: 404 })
  }

  return NextResponse.json(product)
}

export async function PUT(request: NextRequest, { params }: RouteContext) {
  const body = await request.json()
  const updated = await updateProduct(Number(params.id), body)
  return NextResponse.json(updated)
}

export async function DELETE(_request: NextRequest, { params }: RouteContext) {
  await deleteProduct(Number(params.id))
  return new NextResponse(null, { status: 204 }) // No Content
}

Comparison with ASP.NET Minimal API:

// ASP.NET Minimal API
app.MapGet("/api/products", async (string? category, IProductService svc) =>
{
    var products = await svc.GetProductsAsync(category);
    return Results.Ok(products);
});

app.MapPost("/api/products", async (CreateProductDto dto, IProductService svc) =>
{
    if (string.IsNullOrWhiteSpace(dto.Name))
        return Results.BadRequest("name is required");

    var product = await svc.CreateAsync(dto);
    return Results.Created($"/api/products/{product.Id}", product);
});

The structure is almost identical. The main difference: Next.js routes are file-based (the URL comes from the folder path). ASP.NET routes are code-based (you declare the path in MapGet).

Middleware: middleware.ts

Next.js middleware intercepts requests before they reach any route or page — identical to ASP.NET Core middleware in Program.cs.

// middleware.ts — place at project root (alongside app/ and package.json)
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl

  // Auth check — like app.UseAuthorization() combined with [Authorize]
  if (pathname.startsWith('/dashboard')) {
    const token = request.cookies.get('auth-token')?.value

    if (!token) {
      // Redirect to login — like returning a ChallengeResult
      return NextResponse.redirect(new URL('/login', request.url))
    }
  }

  // Add a custom header to all responses
  const response = NextResponse.next()
  response.headers.set('X-Frame-Options', 'DENY')
  return response
}

// Which routes this middleware runs on — like [Authorize] on a controller
export const config = {
  matcher: [
    '/dashboard/:path*',
    '/api/admin/:path*',
    // Exclude static files and Next.js internals
    '/((?!_next/static|_next/image|favicon.ico).*)',
  ],
}

Unlike ASP.NET middleware, Next.js middleware runs at the Edge — it executes close to the user before hitting your server. It cannot use Node.js APIs (fs, crypto, etc.) or access a database. It is limited to the Edge Runtime, which is a subset of the Web APIs. Use it for auth redirects, request rewriting, and header manipulation — not business logic.

Data Fetching Patterns

Next.js has three data rendering models, directly analogous to ASP.NET rendering strategies:

Static Generation (SSG) — like pre-generating HTML from a build step (comparable to pre-rendering in Blazor or publishing a static site):

// No dynamic behavior — this page is built once at deploy time
// Like a completely static HTML file
export default async function AboutPage() {
  const content = await fetch('https://cms.example.com/about').then(r => r.json())
  return <article>{content.body}</article>
}

Incremental Static Regeneration (ISR) — regenerate pages in the background on a timer:

async function getData() {
  const res = await fetch('https://api.example.com/products', {
    next: { revalidate: 3600 } // rebuild this page every hour in the background
  })
  return res.json()
}

Dynamic (Server-Side Rendering) — like a Razor Page’s OnGetAsync — renders fresh HTML on every request:

async function getData() {
  const res = await fetch('https://api.example.com/cart', {
    cache: 'no-store' // never cache — render fresh every request
  })
  return res.json()
}

The general heuristic: static for marketing pages, ISR for product catalogs that change occasionally, dynamic for authenticated/personalized content.
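
The same choice can also be declared for an entire route with segment config exports instead of per-fetch options. A minimal sketch (revalidate and dynamic are standard App Router exports; getProducts is illustrative):

// app/products/page.tsx
export const revalidate = 3600            // ISR: regenerate this route at most once per hour
// export const dynamic = 'force-dynamic' // SSR: render on every request, never cache
// export const dynamic = 'force-static'  // SSG: render once at build time

export default async function ProductsPage() {
  const products = await getProducts()
  return (
    <ul>
      {products.map((p: { id: number; name: string }) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  )
}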

Environment Variables

Next.js environment variables work like appsettings.json + IConfiguration:

# .env.local (like appsettings.Development.json — not committed to git)
DATABASE_URL=postgresql://localhost:5432/mydb
NEXTAUTH_SECRET=supersecret

# Variables prefixed NEXT_PUBLIC_ are included in browser bundles
# Without the prefix, they are server-only (like ConnectionStrings in appsettings.json)
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_...
// Server Component or API route — all env vars available
const dbUrl = process.env.DATABASE_URL // works

// Client Component — only NEXT_PUBLIC_ vars available
const stripeKey = process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY // works
const secret = process.env.NEXTAUTH_SECRET // undefined in browser — intentional

This maps to the ASP.NET pattern where ConnectionStrings stay server-side in configuration and are never exposed to the client, while public keys (like a maps API key) are in appsettings.json and may be included in page output.


Project Structure: Next.js Mapped to ASP.NET MVC

ASP.NET MVC                    Next.js (App Router)
────────────────────────────────────────────────────
Controllers/                   app/api/
  ProductsController.cs          products/route.ts
  UsersController.cs             users/route.ts

Views/                         app/
  Products/                    products/
    Index.cshtml                 page.tsx
    Details.cshtml               [id]/page.tsx
    Create.cshtml                new/page.tsx
  Shared/                      components/ (convention)
    _Layout.cshtml               layout.tsx (in app/)
    _ProductCard.cshtml          ProductCard.tsx

Models/                        types/ or lib/types.ts
  Product.cs                     (TypeScript interfaces)
  CreateProductDto.cs

Services/                      lib/ or services/
  IProductService.cs             product-service.ts
  ProductService.cs              (no interface needed — TS structural typing)

wwwroot/                       public/
  css/                           (static files served as-is)
  js/
  images/

Program.cs                     middleware.ts + next.config.ts
appsettings.json               .env.local / .env.production

Key Differences

No Dependency Injection Container

ASP.NET Core has a built-in IoC container. You register services in Program.cs and inject them via constructors. Next.js has no equivalent. You import functions directly. Inversion of control is achieved through module boundaries and passing dependencies as function arguments or React context — not a container.

// ASP.NET — DI container
builder.Services.AddScoped<IProductService, ProductService>();
// Injected via constructor
public ProductsController(IProductService productService) { ... }
// Next.js — direct import
import { getProducts } from '@/lib/product-service'
// Called directly
const products = await getProducts()

For testing, you pass mock implementations as arguments or use jest module mocking — no mock container required.
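
A minimal sketch of the argument-passing approach; the service module and Fetcher type are illustrative.

// lib/product-service.ts
export interface Product { id: number; name: string; price: number }

type Fetcher = typeof fetch

// The dependency is a parameter with a sensible default, not a constructor injection
export async function getProducts(fetcher: Fetcher = fetch): Promise<Product[]> {
  const res = await fetcher('https://api.example.com/products')
  if (!res.ok) throw new Error(`Failed to load products: ${res.status}`)
  return res.json()
}

// In a test, hand in a fake instead of configuring a container:
// const fake: Fetcher = async () =>
//   new Response(JSON.stringify([{ id: 1, name: 'Test', price: 10 }]))
// const products = await getProducts(fake)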

No Global HttpContext — Requests Are Passed as Parameters

In ASP.NET, HttpContext is ambient — available anywhere via IHttpContextAccessor. In Next.js, the NextRequest object is passed explicitly to route handlers and middleware. Server Components access request data through Next.js functions like cookies() and headers() from next/headers.

import { cookies, headers } from 'next/headers'

export default async function ProfilePage() {
  const cookieStore = cookies()
  const token = cookieStore.get('auth-token')

  const headersList = headers()
  const userAgent = headersList.get('user-agent')

  return <div>Profile page</div>
}

'use client' Is a Module-Level Decision

Once a file has 'use client', every component in that file becomes a Client Component. This boundary propagates — all children of a Client Component are also treated as client-side, unless you pass Server Component output as children. Plan your component tree so that interactive leaf components are small Client Components and data-fetching parents are Server Components.
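
A minimal sketch of the children escape hatch; CollapsiblePanel, ReportTable, and report are illustrative names, and whatever the server-side parent passes as children stays server-rendered.

// components/CollapsiblePanel.tsx: must be a Client Component (it uses state and onClick)
'use client'

import { useState, type ReactNode } from 'react'

export function CollapsiblePanel({ title, children }: { title: string; children: ReactNode }) {
  const [open, setOpen] = useState(true)

  return (
    <section>
      <button onClick={() => setOpen(!open)}>{title}</button>
      {/* children can be Server Component output passed down from a server-side parent */}
      {open && children}
    </section>
  )
}

// A Server Component parent can then write:
//   <CollapsiblePanel title="Q3 Report"><ReportTable data={report} /></CollapsiblePanel>
// ReportTable renders on the server; only CollapsiblePanel's toggle logic ships to the browser.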


Gotchas for .NET Engineers

Gotcha 1: async/await in Server Components works; in Client Components it does not

The most common early mistake. Server Components can be async functions. Client Components cannot use await at the top level of a component (you cannot write const data = await fetch(...) directly in a Client Component). Client Components use useEffect + useState for data fetching, or a library like react-query / SWR.

// CORRECT — Server Component
export default async function ProductsPage() {
  const products = await getProducts() // fine
  return <ul>...</ul>
}

// WRONG — Client Component
'use client'
export default async function Counter() { // async client component is invalid
  const data = await fetch('/api/data') // do not do this
}

// CORRECT — Client Component with data fetching
'use client'
import { useState, useEffect } from 'react'

export default function Counter() {
  const [data, setData] = useState(null)
  useEffect(() => {
    fetch('/api/data').then(r => r.json()).then(setData)
  }, [])
  return <div>{data ? JSON.stringify(data) : 'Loading...'}</div>
}

Gotcha 2: The params object in page and route components is async (a Promise) in Next.js 15

Starting in Next.js 15, params and searchParams in page.tsx and route.ts are Promises, not plain objects. If you are on Next.js 15+, you must await params:

// Next.js 14 and earlier
export default function Page({ params }: { params: { id: string } }) {
  return <div>{params.id}</div>
}

// Next.js 15+
export default async function Page({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params
  return <div>{id}</div>
}

This change caught many teams mid-upgrade. Check your Next.js version and the upgrade guide when migrating.

Gotcha 3: Middleware runs on the Edge Runtime — no Node.js APIs

If you try to import a Node.js module (like fs, path, crypto, or a database client) in middleware.ts, the build will fail or the feature will be silently broken. Middleware runs in the Edge Runtime, which only exposes a subset of Web APIs. Do your database access and business logic in Server Components and API routes, not middleware.

// middleware.ts
import { db } from '@/lib/database' // WRONG — database clients use Node.js APIs

// Use middleware only for things the Edge Runtime supports:
// - cookies(), headers()
// - NextResponse.redirect(), NextResponse.rewrite()
// - JWT verification (using the Web Crypto API, not Node's crypto module)
// - Checking a feature flag from a lightweight store (Vercel Edge Config, etc.)

Gotcha 4: Environment variables without NEXT_PUBLIC_ are not available in the browser

A common confusion: you define API_KEY=abc in .env.local, use it in a Server Component, and it works. Then you copy the same code to a Client Component and get undefined. The variable never shipped to the browser. This is intentional security behavior — not a bug.

# .env.local
DATABASE_URL=postgresql://...     # server-only
NEXT_PUBLIC_GOOGLE_MAPS_KEY=...   # included in browser bundle (safe to expose)

# Always ask: would it be OK if this value appeared in the browser's source?
# If no  → no NEXT_PUBLIC_ prefix
# If yes → add NEXT_PUBLIC_ prefix

Gotcha 5: fetch is extended by Next.js — not the standard browser API

In Server Components, fetch is the standard Web API, but Next.js wraps it to add caching directives (next: { revalidate: N }, cache: 'no-store'). This is a Next.js-specific extension. When you move code from a Server Component to a Client Component, these caching options are silently ignored. Never write data-fetching code with next: { revalidate } options in a Client Component — it will appear to work but will not cache as expected.
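A quick illustration (the endpoint URL is a placeholder): the caching directive only has meaning in server-side code.

// Server Component: Next.js honors the caching extension
export default async function PricesPage() {
  const res = await fetch('https://api.example.com/prices', {
    next: { revalidate: 60 },   // regenerate this data at most once per minute
  })
  const prices = await res.json()
  return <pre>{JSON.stringify(prices, null, 2)}</pre>
}

// In a Client Component the same options are silently ignored; fetch there is the
// plain browser API, so use react-query / SWR when you need client-side caching.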

Gotcha 6: The app/ and pages/ directories cannot coexist in a real project without careful configuration

The old Pages Router (pages/) and new App Router (app/) can technically coexist to support incremental migration, but they have different semantics, different data-fetching patterns, and different layout systems. Mixing them for features (not migration) creates confusion. On a new project, use only app/. On an existing project with pages/, migrate incrementally by moving routes one at a time.


Hands-On Exercise

Build a small Next.js product catalog with the following structure. This exercises file-based routing, Server Components, a Client Component island, an API route, and middleware.

Structure to implement:

app/
  layout.tsx              → Global layout with a nav bar
  page.tsx                → Home page: static, shows "Welcome"
  products/
    page.tsx              → Server Component: fetches and lists products from the API route
    [id]/
      page.tsx            → Server Component: fetches one product by id
  api/
    products/
      route.ts            → GET: return a hardcoded list; POST: log body and return 201
      [id]/
        route.ts          → GET: return single product or 404
middleware.ts             → Log every request path to console; redirect /old-products to /products
components/
  FavoriteButton.tsx      → Client Component with useState to toggle a favorite icon

Requirements:

  • All pages are Server Components (no 'use client' in app/)
  • FavoriteButton is a Client Component, imported into the [id] product detail page
  • middleware.ts uses console.log(request.nextUrl.pathname) and a redirect
  • API routes handle only GET and POST, return NextResponse.json()
  • TypeScript throughout — define a Product interface in types/index.ts

Quick Reference

Concept | Next.js | ASP.NET Equivalent
--- | --- | ---
Route to a page | app/products/page.tsx | Pages/Products/Index.cshtml
Dynamic segment | app/products/[id]/page.tsx | Pages/Products/{id}.cshtml
Catch-all route | app/[...slug]/page.tsx | {*path} wildcard route
Shared layout | app/layout.tsx | _Layout.cshtml
Nested layout | app/products/layout.tsx | Nested _Layout.cshtml (manual)
Loading state | app/products/loading.tsx | Manual spinner via partial view
Error boundary | app/products/error.tsx | try/catch + error partial
404 page | app/not-found.tsx | 404.cshtml / UseStatusCodePages
API endpoint | app/api/users/route.ts | [ApiController] / Minimal API
Middleware | middleware.ts | Program.cs middleware pipeline
Server-only code | Server Component (default) | Controller / Razor Page code-behind
Client interactivity | 'use client' + hooks | Blazor / JavaScript
Static generation | Default (cached fetch) | Pre-rendered static HTML
ISR (periodic rebuild) | next: { revalidate: N } | OutputCache with sliding expiry
Dynamic SSR | cache: 'no-store' | No caching — fresh on each request
Environment config | .env.local | appsettings.Development.json
Server-only env var | VARIABLE_NAME (no prefix) | ConnectionStrings, secrets
Browser env var | NEXT_PUBLIC_VAR | Public config / page model output
Static assets | public/ folder | wwwroot/
TypeScript config | tsconfig.json | .csproj / global.json
Start dev server | npm run dev | dotnet watch
Production build | npm run build | dotnet publish

Further Reading

Nuxt 3: Vue’s Answer to Next.js

For .NET engineers who know: ASP.NET Core, Razor Pages, and the Next.js concepts from the previous article
You’ll learn: How Nuxt 3 applies the same meta-framework ideas as Next.js to Vue 3, what makes Nuxt more opinionated and convention-driven (and why that is often an advantage), and how the same features compare between the two frameworks
Time: 15-20 min read


The .NET Way (What You Already Know)

If you had to choose a single word to describe ASP.NET Core’s design philosophy, it would be “convention.” Drop a file in the right place with the right name, follow the expected structure, and the framework takes care of the wiring. You do not configure what you do not need to change. The framework has a preferred way to do most things, and that preferred way is well-documented, consistent, and supported.

This is different from a “library-first” philosophy, where you assemble individual pieces and wire them yourself. ASP.NET Core is a full framework — routing, DI, middleware, configuration, logging, authentication are all provided and integrated by default. You extend the framework. The framework does not step aside.

Nuxt 3 is built on this same philosophy. It calls itself “the intuitive Vue framework,” and what that means in practice is: Nuxt makes the decisions. File-based routing, auto-imported components and composables, server routes, state management, data fetching — all of these have a prescribed, enforced convention. You do not configure a router. You do not import Vue Router. You create files in the right directories and Nuxt handles the rest.

Next.js, by contrast, gives you more explicit control. You opt into conventions. You configure what you want. It is closer to a well-structured library than a full framework.

Understanding this distinction — Nuxt is more opinionated, Next.js is more explicit — is the key to understanding when to choose each, and how they differ in practice.


The Nuxt 3 Way

Project Setup

npx nuxi@latest init my-nuxt-app
cd my-nuxt-app
npm install
npm run dev   # starts on http://localhost:3000

The initial structure:

my-nuxt-app/
  app.vue             → root component (like app/layout.tsx in Next.js)
  nuxt.config.ts      → framework configuration
  pages/              → file-based routes (create this directory to enable routing)
  components/         → auto-imported components
  composables/        → auto-imported composables
  server/
    api/              → server API routes
    middleware/       → server middleware
  public/             → static assets (like wwwroot/)
  assets/             → processed assets (images, fonts, global CSS)

File-Based Routing

Nuxt’s routing works identically to Next.js at the conceptual level, but with a different directory structure and slightly different file conventions.

pages/                          Next.js app/
  index.vue         → /           page.tsx       → /
  about.vue         → /about      about/page.tsx → /about
  products/
    index.vue       → /products   products/page.tsx
    [id].vue        → /products/5 products/[id]/page.tsx
    [...slug].vue   → catch-all   [...slug]/page.tsx

Key difference: in Nuxt, a single .vue file handles a route (e.g., pages/about.vue handles /about). In Next.js, routes require a directory with a page.tsx inside it. Nuxt’s approach produces fewer files and directories for simple route structures.

<!-- pages/products/[id].vue -->
<script setup lang="ts">
// useRoute() is auto-imported — no import statement needed
const route = useRoute()
const productId = Number(route.params.id)

// useFetch is auto-imported — see Data Fetching section
const { data: product, error } = await useFetch<Product>(`/api/products/${productId}`)
</script>

<template>
  <div v-if="product">
    <h1>{{ product.name }}</h1>
    <p>{{ product.description }}</p>
  </div>
  <div v-else-if="error">Error loading product</div>
</template>

Notice: no imports. useRoute and useFetch are automatically available in every .vue file inside pages/. This is Nuxt’s auto-import system.

Auto-Imports: Nuxt’s Most Distinctive Feature

This is the feature that feels most foreign to both .NET engineers and Next.js developers. In Nuxt, you never write import statements for:

  • Vue Composition API functions (ref, computed, watch, onMounted, etc.)
  • Nuxt composables (useFetch, useRoute, useRouter, useState, etc.)
  • Your own composables placed in the composables/ directory
  • Your own components placed in the components/ directory

They are all available everywhere, automatically:

<!-- pages/example.vue — zero import statements -->
<script setup lang="ts">
// All of these are auto-imported:
const count = ref(0)                          // from Vue
const route = useRoute()                      // from Nuxt
const { data } = await useFetch('/api/data')  // from Nuxt

// Your own composable in composables/useCounter.ts — auto-imported
const { increment, decrement } = useCounter()
</script>

<template>
  <!-- MyButton.vue in components/ — auto-imported as <MyButton /> -->
  <MyButton @click="increment">Click me</MyButton>
</template>

The equivalent in Next.js requires explicit imports in every file:

// Next.js — explicit imports everywhere
import { useState, useEffect } from 'react'
import { useRouter } from 'next/navigation'
import { MyButton } from '@/components/MyButton'

For .NET engineers: Auto-imports feel like C#’s global usings (global using System; in modern .NET), combined with the convention that any class in a specific namespace is available everywhere. It is a build-time feature — Nuxt generates a TypeScript declaration file so your editor’s autocomplete still works.

The tradeoff: auto-imports are convenient and eliminate boilerplate, but they obscure where things come from. When you see useFetch with no import, you need to know whether it is from Nuxt, from a third-party module, or from your own composables/ directory. Teams with strong conventions manage this well. Teams without them can end up with naming collisions or confusion.
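For reference, the useCounter composable used in the example above could be as small as this sketch; the only requirement for auto-import is that the file lives in composables/:

// composables/useCounter.ts (auto-imported because of its location)
export function useCounter(initial = 0) {
  const count = ref(initial)        // ref itself is auto-imported from Vue
  const increment = () => count.value++
  const decrement = () => count.value--
  return { count, increment, decrement }
}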

Layouts

Nuxt has a dedicated layouts/ directory. The layouts/default.vue layout automatically wraps every page:

<!-- layouts/default.vue — wraps all pages by default -->
<template>
  <div>
    <AppHeader />
    <main>
      <slot />   <!-- equivalent to @RenderBody() in Razor / {children} in Next.js -->
    </main>
    <AppFooter />
  </div>
</template>

Switching to a different layout in a specific page:

<!-- pages/account/settings.vue -->
<script setup lang="ts">
// definePageMeta is a Nuxt macro — like [Authorize] or @attribute [Authorize] in Blazor
definePageMeta({
  layout: 'dashboard',  // uses layouts/dashboard.vue instead of layouts/default.vue
  middleware: 'auth',   // runs middleware/auth.ts before this page
})
</script>

<template>
  <h1>Account Settings</h1>
</template>
<!-- layouts/dashboard.vue — used only on pages that opt in -->
<template>
  <div class="dashboard">
    <DashboardSidebar />
    <div class="content">
      <slot />
    </div>
  </div>
</template>

This is more explicit than Next.js nested layouts (which are implicit based on directory structure) but less automatic. Each page declares which layout it uses rather than inheriting one from its directory hierarchy.

Data Fetching: useFetch and useAsyncData

Nuxt provides two composables for data fetching that handle SSR correctly — they run on the server during page generation and serialize their results to the client, avoiding a second fetch on hydration.

useFetch — the high-level shorthand. Use this for most cases:

<script setup lang="ts">
interface Product {
  id: number
  name: string
  price: number
}

// Runs on server during SSR, result hydrated to client — no double-fetch
const {
  data: products,    // Ref<Product[] | null>
  pending,           // Ref<boolean> — is the request in flight?
  error,             // Ref<Error | null>
  refresh,           // () => Promise<void> — manually re-fetch
} = await useFetch<Product[]>('/api/products', {
  query: { category: 'electronics' },   // adds ?category=electronics
  headers: { 'X-Custom-Header': 'val' },
  // cache key — Nuxt deduplicates requests with the same key
  key: 'products-electronics',
})
</script>

<template>
  <div v-if="pending">Loading...</div>
  <div v-else-if="error">{{ error.message }}</div>
  <ul v-else>
    <li v-for="product in products" :key="product.id">
      {{ product.name }} — ${{ product.price }}
    </li>
  </ul>
</template>

useAsyncData — the lower-level primitive. Use this when you need more control, or when fetching from something other than a URL (a database client, a CMS SDK, etc.):

<script setup lang="ts">
// useAsyncData wraps any async function
const { data: user } = await useAsyncData('current-user', async () => {
  // This function runs on the server — can use server-only code
  const response = await $fetch('/api/users/me')
  return response
})

// With dependencies — re-fetches when `userId` changes
const userId = ref(1)
const { data: userProfile } = await useAsyncData(
  () => `user-profile-${userId.value}`,  // dynamic cache key
  () => $fetch(`/api/users/${userId.value}`),
  { watch: [userId] }                     // re-fetch when userId changes
)
</script>

$fetch (note the $ prefix) is Nuxt’s enhanced fetch — on the server it calls the URL directly (bypassing HTTP overhead for same-app API routes), and on the client it makes a normal HTTP request. It is auto-imported.
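In practice you reach for $fetch (rather than useFetch) inside event handlers, where there is no SSR or hydration concern. A small sketch (endpoint and payload are illustrative):

<script setup lang="ts">
async function submitOrder() {
  // $fetch is auto-imported; it throws on non-2xx responses
  const created = await $fetch('/api/orders', {
    method: 'POST',
    body: { items: [{ productId: 1, quantity: 2 }] },
  })
  console.log('created order', created)
}
</script>

<template>
  <button @click="submitOrder">Place order</button>
</template>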

Comparison with Next.js data fetching:

// Next.js — Server Component (no dedicated hook, just async/await)
export default async function ProductsPage() {
  const products = await fetch('/api/products', {
    next: { revalidate: 60 }
  }).then(r => r.json())

  return <ProductList products={products} />
}
<!-- Nuxt — any component (useFetch handles SSR automatically) -->
<script setup lang="ts">
const { data: products } = await useFetch<Product[]>('/api/products', {
  getCachedData: (key, nuxtApp) =>
    nuxtApp.payload.data[key] // use cached data if available
})
</script>

The key difference: Next.js data fetching happens in Server Components (you cannot use await in a Client Component at the top level). Nuxt’s useFetch works in any component and handles SSR/hydration automatically.

Server Routes: server/api/

Nuxt’s server routes live in the server/api/ directory. Each file exports a default handler using defineEventHandler. The pattern is close to Next.js’s route.ts, but the method routing is handled differently:

// server/api/products/index.get.ts
// The .get. in the filename restricts this to GET requests
// (Next.js uses named exports: export async function GET() {})

import { defineEventHandler, getQuery } from 'h3'

export default defineEventHandler(async (event) => {
  const query = getQuery(event)  // equivalent to request.nextUrl.searchParams
  const category = query.category as string | undefined

  const products = await getProductsFromDb(category)
  return products  // Nuxt serializes this to JSON automatically
})
// server/api/products/index.post.ts
import { defineEventHandler, readBody } from 'h3'

export default defineEventHandler(async (event) => {
  const body = await readBody(event)  // equivalent to request.json()

  if (!body.name || !body.price) {
    throw createError({
      statusCode: 400,
      statusMessage: 'name and price are required'
    })
  }

  const product = await createProduct(body)
  setResponseStatus(event, 201)
  return product
})
// server/api/products/[id].get.ts
import { defineEventHandler, getRouterParam } from 'h3'

export default defineEventHandler(async (event) => {
  const id = Number(getRouterParam(event, 'id'))
  const product = await getProductById(id)

  if (!product) {
    throw createError({ statusCode: 404, statusMessage: 'Product not found' })
  }

  return product
})

Nuxt’s server layer uses h3 as its HTTP framework — a minimal, cross-runtime HTTP library. It is analogous to ASP.NET’s Kestrel layer. You rarely interact with it directly; Nuxt’s abstractions (defineEventHandler, getQuery, readBody, createError) cover the common cases.

File naming convention for HTTP methods:

File name | HTTP method
--- | ---
route.ts or index.ts | All methods
route.get.ts | GET only
route.post.ts | POST only
route.put.ts | PUT only
route.delete.ts | DELETE only

State Management with Pinia

Nuxt recommends Pinia for shared state management — the equivalent of a Redux store (in React) or a singleton service in ASP.NET’s DI container. Pinia is more ergonomic than Vuex and works natively with TypeScript.

The @pinia/nuxt module auto-integrates Pinia with Nuxt’s SSR pipeline:

npm install pinia @pinia/nuxt
// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@pinia/nuxt'],
})
// stores/useCartStore.ts
import { defineStore } from 'pinia'

interface CartItem {
  id: number
  name: string
  quantity: number
  price: number
}

export const useCartStore = defineStore('cart', () => {
  // State — equivalent to fields in a service class
  const items = ref<CartItem[]>([])

  // Computed — equivalent to read-only properties
  const total = computed(() =>
    items.value.reduce((sum, item) => sum + item.price * item.quantity, 0)
  )
  const itemCount = computed(() =>
    items.value.reduce((sum, item) => sum + item.quantity, 0)
  )

  // Actions — equivalent to service methods
  function addItem(item: Omit<CartItem, 'quantity'>) {
    const existing = items.value.find(i => i.id === item.id)
    if (existing) {
      existing.quantity++
    } else {
      items.value.push({ ...item, quantity: 1 })
    }
  }

  function removeItem(id: number) {
    items.value = items.value.filter(i => i.id !== id)
  }

  function clearCart() {
    items.value = []
  }

  return { items, total, itemCount, addItem, removeItem, clearCart }
})

Using the store in any component (auto-imported because it is in composables/ or stores/):

<script setup lang="ts">
// useCartStore is auto-imported if placed in composables/ or stores/
// Otherwise import it explicitly
const cart = useCartStore()
</script>

<template>
  <div>
    <p>{{ cart.itemCount }} items — Total: ${{ cart.total.toFixed(2) }}</p>
    <button @click="cart.addItem({ id: 1, name: 'Widget', price: 9.99 })">
      Add Widget
    </button>
  </div>
</template>

For .NET engineers: a Pinia store is conceptually identical to an IMyService singleton registered in the DI container. It holds state, exposes computed values, and provides methods to mutate state. The difference is that it is reactive — components that read from the store automatically re-render when the store’s state changes, with no explicit subscriptions.

Middleware

Nuxt has two types of middleware:

Route middleware — runs before navigating to a page. Declared in middleware/ directory, used in pages via definePageMeta. Equivalent to an MVC ActionFilter or Razor Page’s OnPageHandlerExecuting:

// middleware/auth.ts
export default defineNuxtRouteMiddleware((to, from) => {
  const { loggedIn } = useAuth()

  if (!loggedIn.value) {
    // Redirect — like returning a RedirectToActionResult in MVC
    return navigateTo('/login')
  }
})
<!-- pages/dashboard/index.vue — opts in to the auth middleware -->
<script setup lang="ts">
definePageMeta({
  middleware: 'auth'
})
</script>

Server middleware — runs on every request before routing. Placed in server/middleware/. Equivalent to ASP.NET Core middleware in Program.cs:

// server/middleware/request-logger.ts
import { defineEventHandler, getRequestURL } from 'h3'

export default defineEventHandler((event) => {
  const url = getRequestURL(event)
  console.log(`${new Date().toISOString()} ${event.method} ${url.pathname}`)
  // No return value — falls through to the next handler (like calling next() in ASP.NET)
})

The Nuxt Module Ecosystem

Nuxt’s module system is comparable to ASP.NET NuGet packages that extend the framework — not just add a library, but actually integrate with the framework’s pipeline. A Nuxt module can add routes, extend the build pipeline, register server middleware, and configure the auto-import system.

Common modules you will encounter:

Module | Purpose (ASP.NET analogy where relevant)
--- | ---
@nuxtjs/tailwindcss | Tailwind CSS integration
@pinia/nuxt | Pinia SSR integration
@nuxtjs/color-mode | Dark mode with SSR
nuxt-auth-utils | Session management (like ASP.NET Identity cookie auth)
@nuxt/image | Image optimization (like Azure CDN image processing)
@nuxtjs/i18n | Localization (like ASP.NET Core’s IStringLocalizer)
@nuxt/content | File-based CMS (Markdown/YAML content)

Adding a module:

// nuxt.config.ts
export default defineNuxtConfig({
  modules: [
    '@pinia/nuxt',
    '@nuxtjs/tailwindcss',
    '@nuxt/image',
  ],
  // Module-specific configuration
  image: {
    quality: 80,
    formats: ['webp', 'jpeg'],
  },
})

Nuxt vs. Next.js: Side-by-Side

The best way to see the differences is to implement the same feature in both frameworks.

Feature: Product listing page with server-side data fetching

// Next.js — app/products/page.tsx
// Must be a Server Component to fetch server-side
// Client interactivity requires a separate 'use client' component

import { ProductCard } from '@/components/ProductCard'
import type { Product } from '@/types'

async function getProducts(): Promise<Product[]> {
  const res = await fetch(`${process.env.API_URL}/products`, {
    next: { revalidate: 60 }
  })
  if (!res.ok) throw new Error('Failed to fetch products')
  return res.json()
}

export default async function ProductsPage() {
  const products = await getProducts()

  return (
    <main>
      <h1>Products</h1>
      <div className="grid grid-cols-3 gap-4">
        {products.map(p => (
          <ProductCard key={p.id} product={p} />
        ))}
      </div>
    </main>
  )
}
<!-- Nuxt — pages/products.vue -->
<!-- Works as SSR or SSG with no special component type needed -->

<script setup lang="ts">
import type { Product } from '~/types'

const { data: products, error } = await useFetch<Product[]>('/api/products', {
  key: 'products-list'
})
</script>

<template>
  <main>
    <h1>Products</h1>
    <div class="grid grid-cols-3 gap-4">
      <ProductCard
        v-for="product in products"
        :key="product.id"
        :product="product"
      />
    </div>
  </main>
</template>

Feature: Auth-protected page

// Next.js — middleware.ts (global) + Server Component check
// middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  if (request.nextUrl.pathname.startsWith('/dashboard')) {
    const token = request.cookies.get('auth-token')
    if (!token) {
      return NextResponse.redirect(new URL('/login', request.url))
    }
  }
  return NextResponse.next()
}
// Nuxt — middleware/auth.ts (per-route middleware)
export default defineNuxtRouteMiddleware(() => {
  const { loggedIn } = useAuth()
  if (!loggedIn.value) {
    return navigateTo('/login')
  }
})
<!-- Nuxt — pages/dashboard/index.vue — opt-in to the middleware -->
<script setup lang="ts">
definePageMeta({ middleware: 'auth' })
</script>

Feature: Global state

// Next.js — React Context (or Zustand/Jotai)
// providers/CartProvider.tsx
'use client'
import { createContext, useContext, useState } from 'react'

const CartContext = createContext<CartContextType | null>(null)

export function CartProvider({ children }: { children: React.ReactNode }) {
  const [items, setItems] = useState<CartItem[]>([])
  // ...
  return <CartContext.Provider value={{ items, addItem, removeItem }}>{children}</CartContext.Provider>
}

export function useCart() {
  const ctx = useContext(CartContext)
  if (!ctx) throw new Error('useCart must be inside CartProvider')
  return ctx
}

// Must wrap the app in app/layout.tsx:
// <CartProvider>{children}</CartProvider>
// Nuxt — Pinia store (no Provider wrapper needed)
// stores/useCartStore.ts
export const useCartStore = defineStore('cart', () => {
  const items = ref<CartItem[]>([])
  function addItem(item: CartItem) { items.value.push(item) }
  return { items, addItem }
})

// Used anywhere without a Provider:
// const cart = useCartStore()

Summary comparison table

Feature | Next.js | Nuxt 3
--- | --- | ---
Philosophy | Explicit — you configure what you need | Convention — framework decides, you extend
Routing | app/ directory with page.tsx per route | pages/ directory with route.vue per route
Data fetching | async/await in Server Components | useFetch / useAsyncData in any component
SSR/SSG/ISR | Configured per fetch call (cache, revalidate) | Configured via useFetch options and nuxt.config.ts
Server routes | app/api/route.ts (named method exports) | server/api/route.method.ts (file name = method)
API layer | Next.js’s own request handling (Web Request/Response) | H3 directly (Nuxt is built on Nitro, which uses H3)
Client-only code | 'use client' directive | .client.vue suffix or <ClientOnly> wrapper
Auto-imports | No — explicit imports required | Yes — Vue, Nuxt composables, your code, components
State management | React Context, Zustand, Jotai (your choice) | Pinia (officially recommended and integrated)
Route middleware | middleware.ts (global, Edge Runtime only) | middleware/name.ts (per-page opt-in or global)
Layouts | app/layout.tsx — nesting by directory | layouts/ directory — explicit opt-in per page
Module system | No dedicated module system | @nuxt/module-builder — deep framework integration
ASP.NET analogy | Closest to Razor Pages (explicit, structured) | Closest to ASP.NET Core with conventions baked in
Learning curve | Lower if you know React | Lower if you know Vue 3
Flexibility | More — fewer enforced conventions | Less — more decisions made for you

Key Differences

Nuxt Is Built on Nitro, Which Is Not Node.js-Specific

Next.js is tightly coupled to Node.js (with some Edge Runtime support via Vercel). Nuxt’s server layer is built on Nitro, a server toolkit that can compile and deploy to Node.js, Deno, Bun, Cloudflare Workers, Vercel Edge Functions, and other runtimes. A Nuxt app can be deployed to virtually any hosting environment that supports any of these runtimes — with a single configuration change in nuxt.config.ts.
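For example, switching deployment targets is usually just a Nitro preset in nuxt.config.ts (the preset name below is one of several documented targets; treat the snippet as a sketch):

// nuxt.config.ts: build output targets Cloudflare Pages instead of a Node.js server
export default defineNuxtConfig({
  nitro: {
    preset: 'cloudflare-pages',
  },
})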

The server/ Directory Is a Full Nitro Server

Everything in server/api/, server/routes/, server/middleware/, and server/plugins/ runs on Nitro — a full server environment with access to the filesystem, database connections, and all Node.js APIs. Unlike Next.js middleware (which runs on the Edge Runtime and is restricted to Web APIs), Nuxt’s server middleware is unrestricted.
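To make the contrast concrete, here is a sketch of a server route that uses Node.js APIs directly, something that would break in Next.js Edge middleware (the route path and payload are illustrative):

// server/api/health.get.ts: full Node.js access inside Nitro
import { defineEventHandler } from 'h3'
import os from 'node:os'

export default defineEventHandler(() => {
  // fs, net, database drivers, etc. are equally available here
  return { hostname: os.hostname(), uptimeSeconds: Math.round(process.uptime()) }
})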

useState in Nuxt Is an SSR-Safe Shared Ref

Nuxt has its own useState composable, which is different from React’s useState. Nuxt’s useState creates a shared, SSR-safe ref — the same value is available across all components during a single request on the server, and it is hydrated into the client. Use it instead of ref when you need state shared between components that survives SSR hydration:

// This state is shared across components and survives server→client hydration
const theme = useState<'light' | 'dark'>('app-theme', () => 'light')

Do not confuse this with React’s useState, which is local component state.
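Because the value is keyed, two unrelated components can read and write the same state without any provider (component names are illustrative):

<!-- components/ThemeToggle.vue -->
<script setup lang="ts">
const theme = useState<'light' | 'dark'>('app-theme', () => 'light')
</script>
<template>
  <button @click="theme = theme === 'light' ? 'dark' : 'light'">Toggle theme</button>
</template>

<!-- components/AppHeader.vue reads the same 'app-theme' entry -->
<script setup lang="ts">
const theme = useState<'light' | 'dark'>('app-theme', () => 'light')
</script>
<template>
  <header :class="theme">My Store</header>
</template>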


Gotchas for .NET Engineers

Gotcha 1: Auto-imports are build-time, not runtime magic

Auto-imports feel like they might have runtime overhead or rely on global scope. They do not. Nuxt’s build tool (powered by unimport) statically analyzes your code and injects the correct import statements before compiling. The output is identical to code you would write with explicit imports. Your editor (with the Nuxt VS Code extension) gets full TypeScript completion because Nuxt generates a .nuxt/types/ directory with declarations for all auto-imported items.

If you see a TypeScript error saying useFetch is not defined, run npx nuxi prepare to regenerate the type declarations:

npx nuxi prepare   # regenerates .nuxt/types — run this after adding modules

Gotcha 2: useFetch key collisions cause data to bleed between routes

Every useFetch call has a cache key. If two different useFetch calls on different pages use the same key (or if you do not provide a key and Nuxt generates a collision), the cached data from one call may be returned to the other. Always provide explicit, unique keys for any fetch that uses dynamic parameters:

// WRONG — collision if two product pages are SSR'd in the same Nuxt process
const { data } = await useFetch(`/api/products/${id}`)

// CORRECT — unique key includes the dynamic value
const { data } = await useFetch(`/api/products/${id}`, {
  key: `product-${id}`
})

This is especially important in production with Nuxt’s payload extraction — incorrect keys will serve wrong data to the wrong users.

Gotcha 3: Server routes are not automatically type-safe with useFetch

Unlike tRPC or Nuxt’s experimental typed fetch, a plain useFetch('/api/products') call has no compile-time connection to the server/api/products/index.get.ts handler’s return type. You must manually annotate the generic type parameter. Consider using $fetch.native typed with your DTOs, or the useNuxtData pattern for type sharing:

// You are responsible for keeping these in sync
interface Product { id: number; name: string; price: number }
const { data } = await useFetch<Product[]>('/api/products')

For a fully type-safe API layer in a production Nuxt app, consider nuxt-typed-router and organizing shared types in a shared/ or types/ directory that both server/ and pages/ import.

Gotcha 4: The pages/ directory must exist for routing to be enabled

In a fresh Nuxt project, if you delete or never create pages/, Nuxt falls back to using app.vue as a single-page application with no routing. The router is not included in the bundle until you create a pages/ directory. This is a surprising failure mode: you add a page file in the wrong directory, wonder why it does not route, and eventually discover that pages/ was missing entirely:

# If routing is not working, verify this directory exists:
ls pages/

# And that nuxt.config.ts does not disable the pages feature:
# pages: false  ← this disables routing if set

Gotcha 5: Server middleware in server/middleware/ runs on every request including API routes

Unlike Next.js middleware (which you opt routes into via the matcher config), Nuxt server middleware runs on every single request to the server — including your API routes in server/api/. If your server middleware does something expensive (a database lookup, an external API call), it will run on every API request too. Use getRequestURL(event) to gate your middleware to specific paths:

// server/middleware/auth-check.ts
export default defineEventHandler((event) => {
  const url = getRequestURL(event)

  // Only apply to non-public routes
  if (url.pathname.startsWith('/api/public')) {
    return // skip — equivalent to calling next() without doing anything
  }

  // Apply auth check to everything else
  const token = getCookie(event, 'auth-token')
  if (!token) {
    throw createError({ statusCode: 401, statusMessage: 'Unauthorized' })
  }
})

Gotcha 6: definePageMeta cannot use runtime values

definePageMeta is a compiler macro — it is evaluated at build time, not runtime. You cannot use reactive values or computed data inside it:

// WRONG — userRole is runtime state, definePageMeta is build-time
const userRole = ref('admin')
definePageMeta({
  middleware: userRole.value === 'admin' ? 'admin-auth' : 'user-auth' // build error
})

// CORRECT — put the logic in the middleware itself
definePageMeta({ middleware: 'role-auth' })

// middleware/role-auth.ts
export default defineNuxtRouteMiddleware(() => {
  const { role } = useAuth()
  if (role.value !== 'admin') return navigateTo('/unauthorized')
})

Hands-On Exercise

Build a small Nuxt 3 product catalog that mirrors the Next.js exercise from the previous article. Implement the same feature set so you can compare the approaches.

Requirements:

pages/
  index.vue           → Home page: static, "Welcome to the store"
  products/
    index.vue         → List all products (useFetch from server route)
    [id].vue          → Product detail page with a FavoriteButton component
server/
  api/
    products/
      index.get.ts    → Return hardcoded product list as JSON
      [id].get.ts     → Return single product or 404
  middleware/
    logger.ts         → Log every request path with timestamp
components/
  FavoriteButton.vue  → Client-side component: toggle favorite state with ref()
  ProductCard.vue     → Server-rendered card: accepts product prop
layouts/
  default.vue         → Site header with nav links; <slot /> for content
middleware/
  auth.ts             → Check for a 'loggedIn' cookie; redirect to /login if absent
pages/
  dashboard/
    index.vue         → Protected page using definePageMeta({ middleware: 'auth' })
stores/
  useCartStore.ts     → Pinia store with items ref, addItem action, total computed

Specific requirements:

  • Full TypeScript — define a Product interface in types/index.ts
  • server/api/products/index.get.ts must use defineEventHandler with typed return
  • pages/products/[id].vue must use useFetch with an explicit key
  • FavoriteButton.vue must be a client-only component (check Nuxt docs for <ClientOnly> wrapper or the client component suffix)
  • Demonstrate auto-imports — no import statements for Vue composables or useFetch

Quick Reference

Concept | Nuxt 3 | Next.js | ASP.NET Equivalent
--- | --- | --- | ---
Page file | pages/products.vue | app/products/page.tsx | Pages/Products/Index.cshtml
Dynamic route | pages/products/[id].vue | app/products/[id]/page.tsx | Pages/Products/{id}.cshtml
Default layout | layouts/default.vue | app/layout.tsx | _Layout.cshtml
Named layout | layouts/dashboard.vue + definePageMeta | Directory-based nesting | Nested layouts (manual)
Data fetching (SSR) | useFetch / useAsyncData | async Server Component | OnGetAsync in Razor Page
API endpoint (all methods) | server/api/route.ts | app/api/route.ts | [ApiController]
API endpoint (GET only) | server/api/route.get.ts | export async function GET() | [HttpGet]
Throw HTTP error | throw createError({ statusCode: 404 }) | return NextResponse.json({}, { status: 404 }) | return NotFound()
Route middleware | middleware/auth.ts + definePageMeta | middleware.ts (global) | ActionFilter / [Authorize]
Server middleware | server/middleware/name.ts | middleware.ts (Edge only) | Program.cs middleware
Global state | Pinia store in stores/ | React Context / Zustand | Singleton service (DI)
Auto-imported composables | Anything in composables/ | No auto-imports | (no equivalent)
Auto-imported components | Anything in components/ | No auto-imports | Razor tag helpers (partial)
Framework config | nuxt.config.ts | next.config.ts | Program.cs + appsettings.json
Add framework extension | Nuxt module in nuxt.config.ts | No module system | NuGet package + builder.Services.Add*
Client-only component | <ClientOnly> wrapper or .client.vue suffix | 'use client' directive | Blazor WebAssembly
SSR state hydration | useState('key', () => default) | Server Component props | ViewData / Model binding
Static assets | public/ | public/ | wwwroot/
Processed assets | assets/ | src/assets/ (convention) | Build pipeline assets
Dev server | npm run dev | npm run dev | dotnet watch
Production build | npm run build | npm run build | dotnet publish

Further Reading

  • Nuxt 3 documentation — Start with “Getting Started” and the “Directory Structure” section. The docs are well-organized and the official source for auto-import behavior
  • Pinia documentation — Covers stores, plugins, and SSR setup in detail
  • Nitro server documentation — The underlying server framework for Nuxt. Relevant when you need advanced server configuration, deployment targets, or server plugins
  • Nuxt module ecosystem — The official module registry. Search by category to find integrated solutions for auth, CMS, image optimization, analytics, etc.
  • Nuxt vs. Next.js — community comparison — Nuxt team’s own comparison, which is worth reading alongside the Next.js team’s perspective
  • unimport — how auto-imports work — The build-time module that powers Nuxt’s auto-import system, if you want to understand the mechanism

State Management: From ViewModels to Stores

For .NET engineers who know: MVVM with ViewModels (WPF/MAUI), Blazor component state and @bind, server-side session, scoped DI services as state containers
You’ll learn: How state is managed in React and Vue — from local component state up through global stores — and when each layer is the right tool
Time: 15-20 minutes

The .NET Way (What You Already Know)

In .NET, state has a home that is usually determined at design time. In WPF and MAUI, ViewModels hold UI state: bound properties, command handlers, validation state. The ViewModel is instantiated by the DI container (or by the View directly), and two-way binding keeps the UI synchronized. The framework knows where state lives because the binding system enforces it.

// WPF/MAUI ViewModel — state is explicit, framework-managed
public class OrderViewModel : ObservableObject
{
    private Order? _selectedOrder;
    public Order? SelectedOrder
    {
        get => _selectedOrder;
        set => SetProperty(ref _selectedOrder, value);
    }

    private bool _isLoading;
    public bool IsLoading
    {
        get => _isLoading;
        set => SetProperty(ref _isLoading, value);
    }

    [RelayCommand]
    private async Task LoadOrderAsync(int id)
    {
        IsLoading = true;
        SelectedOrder = await _orderService.GetAsync(id);
        IsLoading = false;
    }
}

In Blazor, component state lives in fields or properties inside the component class. Child components receive state through parameters, and signal changes back to parents through EventCallback. For cross-component state, you inject a scoped service — effectively a manually implemented observable store:

// Blazor — service as a cross-component state container
public class CartState
{
    private readonly List<CartItem> _items = new();
    public IReadOnlyList<CartItem> Items => _items.AsReadOnly();
    public event Action? OnChange;

    public void AddItem(CartItem item)
    {
        _items.Add(item);
        NotifyStateChanged();
    }

    private void NotifyStateChanged() => OnChange?.Invoke();
}

In ASP.NET applications you also have server-side session, TempData, and ViewBag for short-lived cross-request state — but those patterns don’t translate to the JS world at all. The server holds no per-user state between requests; everything meaningful lives in the client.

The Modern JS/TS Way

Layer 1: Local Component State

The closest equivalent to a Blazor component’s fields is local component state. In React it is useState; in Vue 3 it is ref and reactive.

// React — useState for local state
import { useState } from "react";

interface Order {
  id: number;
  total: number;
  status: "pending" | "fulfilled" | "cancelled";
}

function OrderCard({ orderId }: { orderId: number }) {
  const [order, setOrder] = useState<Order | null>(null);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<Error | null>(null);

  async function loadOrder() {
    setIsLoading(true);
    setError(null);
    try {
      const data = await orderService.get(orderId);
      setOrder(data);
    } catch (err) {
      setError(err instanceof Error ? err : new Error("Unknown error"));
    } finally {
      setIsLoading(false);
    }
  }

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error: {error.message}</div>;
  if (!order) return <button onClick={loadOrder}>Load Order</button>;

  return <div>{order.id}: {order.total}</div>;
}
// Vue 3 — ref and reactive for local state
import { ref, reactive } from "vue";

interface Order {
  id: number;
  total: number;
  status: "pending" | "fulfilled" | "cancelled";
}

// In a <script setup> block:
const order = ref<Order | null>(null);
const isLoading = ref(false);
const error = ref<Error | null>(null);

async function loadOrder(orderId: number) {
  isLoading.value = true;
  error.value = null;
  try {
    order.value = await orderService.get(orderId);
  } catch (err) {
    error.value = err instanceof Error ? err : new Error("Unknown error");
  } finally {
    isLoading.value = false;
  }
}

Notice isLoading, error, and data managed together — this is the manual version of a pattern that TanStack Query handles automatically. You will write this pattern repeatedly until you reach for TanStack Query, at which point you will delete most of it.

Layer 2: Lifting State Up

When two sibling components need to share state, the state moves to their common parent. This is identical to what happens in MVVM when a shared ViewModel serves multiple views.

// React — parent holds shared state, passes down via props and callbacks
function OrderDashboard() {
  const [selectedOrderId, setSelectedOrderId] = useState<number | null>(null);

  return (
    <div>
      {/* OrderList signals which order was selected */}
      <OrderList
        onSelect={setSelectedOrderId}
        selectedId={selectedOrderId}
      />
      {/* OrderDetail renders the selected order */}
      {selectedOrderId && (
        <OrderDetail orderId={selectedOrderId} />
      )}
    </div>
  );
}

This pattern works well for shallow component trees. When the state needs to travel through several intermediate layers that do not themselves use it — called “prop drilling” — it becomes painful. That is when you reach for Context.

Layer 3: React Context API (Scoped DI for State)

React Context is the closest analogue to Blazor’s scoped service injection. A provider component wraps a subtree and makes state available to any descendant without threading it through props manually.

// React Context — like a scoped service in Blazor
import { createContext, useContext, useState, ReactNode } from "react";

interface User {
  id: number;
  name: string;
  role: "admin" | "viewer";
}

interface AuthContextValue {
  user: User | null;
  login: (credentials: { email: string; password: string }) => Promise<void>;
  logout: () => void;
}

// Create the context with a sensible default
const AuthContext = createContext<AuthContextValue | null>(null);

// Custom hook — throw early if used outside the provider
function useAuth(): AuthContextValue {
  const ctx = useContext(AuthContext);
  if (!ctx) {
    throw new Error("useAuth must be used inside an AuthProvider");
  }
  return ctx;
}

// Provider component — wraps the subtree that needs auth state
function AuthProvider({ children }: { children: ReactNode }) {
  const [user, setUser] = useState<User | null>(null);

  async function login(credentials: { email: string; password: string }) {
    const loggedInUser = await authService.login(credentials);
    setUser(loggedInUser);
  }

  function logout() {
    setUser(null);
    authService.clearSession();
  }

  return (
    <AuthContext.Provider value={{ user, login, logout }}>
      {children}
    </AuthContext.Provider>
  );
}

// Any descendant can consume it
function UserMenu() {
  const { user, logout } = useAuth();
  if (!user) return null;
  return (
    <div>
      <span>{user.name}</span>
      <button onClick={logout}>Sign out</button>
    </div>
  );
}

Context has a significant performance characteristic that trips up .NET engineers: every component that calls useContext(AuthContext) re-renders when any value in the context changes. If your context holds both user and cartItems and the cart changes, every component reading the context re-renders — including those that only need user.

The fix is to split contexts by update frequency and cohesion, or to use a dedicated store library. Do not try to put everything in one context.

Layer 4 (React): Zustand (Selector-Based Global Store)

Zustand is a lightweight state management library that avoids the Context re-render problem. Each component subscribes to only the specific slice of state it uses — like binding to a single column of a shared DataTable rather than reading the whole table.

// Zustand store — recommended for React global state
import { create } from "zustand";

interface CartItem {
  productId: number;
  name: string;
  quantity: number;
  unitPrice: number;
}

interface CartStore {
  items: CartItem[];
  isOpen: boolean;
  addItem: (item: CartItem) => void;
  removeItem: (productId: number) => void;
  updateQuantity: (productId: number, quantity: number) => void;
  clearCart: () => void;
  toggleCart: () => void;
  total: () => number;
}

const useCartStore = create<CartStore>((set, get) => ({
  items: [],
  isOpen: false,

  addItem: (item) =>
    set((state) => {
      const existing = state.items.find((i) => i.productId === item.productId);
      if (existing) {
        return {
          items: state.items.map((i) =>
            i.productId === item.productId
              ? { ...i, quantity: i.quantity + item.quantity }
              : i
          ),
        };
      }
      return { items: [...state.items, item] };
    }),

  removeItem: (productId) =>
    set((state) => ({
      items: state.items.filter((i) => i.productId !== productId),
    })),

  updateQuantity: (productId, quantity) =>
    set((state) => ({
      items: state.items.map((i) =>
        i.productId === productId ? { ...i, quantity } : i
      ),
    })),

  clearCart: () => set({ items: [] }),
  toggleCart: () => set((state) => ({ isOpen: !state.isOpen })),

  // Derived value exposed as a function. A getter defined on this object would be
  // flattened to a stale number by Zustand's shallow state merge on the first set().
  total: () =>
    get().items.reduce(
      (sum, item) => sum + item.unitPrice * item.quantity,
      0
    ),
}));

// Components subscribe only to what they use
function CartBadge() {
  // Re-renders only when items.length changes, not on every cart update
  const count = useCartStore((state) => state.items.length);
  if (count === 0) return null;
  return <span className="badge">{count}</span>;
}

function CartTotal() {
  // Re-renders only when total changes
  const total = useCartStore((state) => state.total());
  return <div>Total: ${total.toFixed(2)}</div>;
}

The selector pattern (useCartStore(state => state.items.length)) is how Zustand avoids unnecessary re-renders. The component only re-renders when that specific derived value changes. This is what Context cannot do without significant additional tooling.

Layer 4 (Vue): Pinia (Official Vue Store)

Pinia is the official Vue 3 state management library, replacing Vuex. It looks and feels like a typed ViewModel:

// Pinia store — Vue's equivalent of a typed ViewModel
import { defineStore } from "pinia";
import { ref, computed } from "vue";

interface CartItem {
  productId: number;
  name: string;
  quantity: number;
  unitPrice: number;
}

export const useCartStore = defineStore("cart", () => {
  // State
  const items = ref<CartItem[]>([]);
  const isOpen = ref(false);

  // Computed (like ViewModel properties)
  const total = computed(() =>
    items.value.reduce(
      (sum, item) => sum + item.unitPrice * item.quantity,
      0
    )
  );

  const itemCount = computed(() => items.value.length);

  // Actions (like ViewModel commands)
  function addItem(item: CartItem) {
    const existing = items.value.find((i) => i.productId === item.productId);
    if (existing) {
      existing.quantity += item.quantity;
    } else {
      items.value.push(item);
    }
  }

  function removeItem(productId: number) {
    items.value = items.value.filter((i) => i.productId !== productId);
  }

  function clearCart() {
    items.value = [];
  }

  function toggleCart() {
    isOpen.value = !isOpen.value;
  }

  return { items, isOpen, total, itemCount, addItem, removeItem, clearCart, toggleCart };
});

// In a Vue component with <script setup>:
// import { useCartStore } from '@/stores/cart'
// const cart = useCartStore()
// cart.addItem({ ... })
// cart.total  // reactive computed value

Pinia stores are typed, DevTools-integrated, and SSR-compatible. They support Vuex-style options API if you prefer, but the composition API form shown above is the modern convention and reads very naturally for .NET engineers familiar with ViewModels.

Layer 5: Redux (Legacy — Know It for Reading Code)

Redux is the pattern you will encounter most often in existing React codebases. It is not the recommended choice for new projects, but you will read it and you need to recognize it.

The Redux pattern: a single immutable global state tree, updated by dispatching actions, transformed by pure reducer functions.

// Redux Toolkit — the modern form (RTK), not the old verbose form
import { createSlice, configureStore, PayloadAction } from "@reduxjs/toolkit";

interface CartItem {
  productId: number;
  name: string;
  quantity: number;
  unitPrice: number;
}

interface CartState {
  items: CartItem[];
  isOpen: boolean;
}

const cartSlice = createSlice({
  name: "cart",
  initialState: { items: [], isOpen: false } as CartState,
  reducers: {
    addItem(state, action: PayloadAction<CartItem>) {
      // RTK uses Immer under the hood — mutable syntax, immutable result
      const existing = state.items.find(
        (i) => i.productId === action.payload.productId
      );
      if (existing) {
        existing.quantity += action.payload.quantity;
      } else {
        state.items.push(action.payload);
      }
    },
    removeItem(state, action: PayloadAction<number>) {
      state.items = state.items.filter(
        (i) => i.productId !== action.payload
      );
    },
    clearCart(state) {
      state.items = [];
    },
  },
});

export const { addItem, removeItem, clearCart } = cartSlice.actions;

const store = configureStore({ reducer: { cart: cartSlice.reducer } });
type RootState = ReturnType<typeof store.getState>;

// Component usage
import { useSelector, useDispatch } from "react-redux";

function CartBadge() {
  const count = useSelector((state: RootState) => state.cart.items.length);
  const dispatch = useDispatch();
  // dispatch(addItem({ ... }))
  return <span>{count}</span>;
}

Redux’s verbosity exists for a reason: the strict unidirectional data flow makes large-scale applications easier to debug. Redux DevTools can replay every action and show you exactly how state evolved. For teams of 10+ working on complex apps, that traceability has genuine value. For most apps, it is overhead.

Layer 6: TanStack Query (Server State — the Game-Changer)

This is the most important concept in this article. The single biggest mistake teams make is putting server data in global stores.

The mental model shift: server state is not the same as UI state. Server state lives on the server. What you have in the client is a cache of server data. Caches have different requirements than UI state: they go stale, they need to be invalidated, they need to be re-fetched, they need optimistic updates.

TanStack Query (formerly React Query) treats server state as exactly that — a cache — and handles all of it automatically.

// Without TanStack Query — the manual approach you would write in .NET style
function OrderList({ userId }: { userId: number }) {
  const [orders, setOrders] = useState<Order[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<Error | null>(null);

  useEffect(() => {
    setIsLoading(true);
    orderService
      .getByUser(userId)
      .then(setOrders)
      .catch(setError)
      .finally(() => setIsLoading(false));
  }, [userId]);

  // ... 30 more lines for pagination, refetch on focus, cache invalidation
}
// With TanStack Query — 3 lines instead of 30, plus automatic caching
import { useQuery, useMutation, useQueryClient } from "@tanstack/react-query";

function OrderList({ userId }: { userId: number }) {
  const {
    data: orders,
    isLoading,
    error,
    refetch,
  } = useQuery({
    queryKey: ["orders", userId],  // cache key — same key = same cache
    queryFn: () => orderService.getByUser(userId),
    staleTime: 30_000,             // consider fresh for 30 seconds
    refetchOnWindowFocus: true,    // re-fetch when tab regains focus
  });

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error loading orders</div>;

  return (
    <ul>
      {orders?.map((order) => (
        <li key={order.id}>{order.id}: ${order.total}</li>
      ))}
    </ul>
  );
}

// Mutations with automatic cache invalidation
function CreateOrderForm({ userId }: { userId: number }) {
  const queryClient = useQueryClient();

  const createOrder = useMutation({
    mutationFn: (data: CreateOrderInput) => orderService.create(data),
    onSuccess: () => {
      // Invalidate the orders cache — next render will re-fetch
      queryClient.invalidateQueries({ queryKey: ["orders", userId] });
    },
  });

  return (
    <form onSubmit={(e) => {
      e.preventDefault();
      createOrder.mutate({ userId, items: [] });
    }}>
      <button type="submit" disabled={createOrder.isPending}>
        {createOrder.isPending ? "Creating..." : "Create Order"}
      </button>
    </form>
  );
}

TanStack Query gives you: automatic caching, background re-fetching, stale-while-revalidate, loading and error states, request deduplication (if two components ask for the same query key simultaneously, only one network request fires), pagination helpers, and infinite scroll support.

The Vue equivalent is @tanstack/vue-query, with identical semantics and a composition-style API.
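A minimal sketch of the same orders query with @tanstack/vue-query (orderService is the same illustrative service used in the React examples):

<!-- OrderList.vue -->
<script setup lang="ts">
import { useQuery } from "@tanstack/vue-query";

const props = defineProps<{ userId: number }>();

const { data: orders, isLoading, error } = useQuery({
  queryKey: ["orders", props.userId],
  queryFn: () => orderService.getByUser(props.userId),
  staleTime: 30_000,
});
</script>

<template>
  <div v-if="isLoading">Loading...</div>
  <div v-else-if="error">Error loading orders</div>
  <ul v-else>
    <li v-for="order in orders" :key="order.id">{{ order.id }}: ${{ order.total }}</li>
  </ul>
</template>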

When to Use What

The rule is simpler than you might expect:

  • Server data (anything from an API): TanStack Query
  • Form state: keep it local, or use React Hook Form / VeeValidate for complex forms
  • Ephemeral UI state (modal open, selected tab, accordion open): local useState / ref
  • Shared UI state that multiple components across the tree need (current user, theme, cart, notification count): Zustand (React) or Pinia (Vue)
  • Do not use global stores for server data. TanStack Query is the store for server data.

The most overused pattern in React: fetching data in useEffect, storing it in global Redux state, selecting it in every component that needs it. TanStack Query replaces this entirely.

Key Differences

.NET Pattern | JS/TS Equivalent | Notes
--- | --- | ---
WPF/MAUI ViewModel | Zustand store / Pinia store | JS stores are not bound to specific views
Blazor component fields | useState (React) / ref (Vue) | Re-renders on every change
Blazor EventCallback | Callback props / emits | Parent passes function down; child calls it
Blazor scoped service | React Context | Context re-renders all consumers on change
ObservableObject.SetProperty | Zustand set, Pinia direct mutation | Reactivity is tracked by the framework
INotifyPropertyChanged | Built into React state / Vue refs | No interface to implement
async ViewModel load command | TanStack Query useQuery | Caching, loading states, and re-fetch included
HttpClient + store | TanStack Query | Do not store API data in Redux/Zustand
Server-side Session | Not applicable | Client manages its own state entirely

Gotchas for .NET Engineers

Gotcha 1: Mutating State Directly Does Nothing in React

React detects state changes by reference equality. If you mutate an object in place, React sees the same reference and skips the re-render.

// BROKEN — mutating the existing array; React sees the same reference
function addOrder(newOrder: Order) {
  orders.push(newOrder); // orders is still the same array reference
  setOrders(orders);     // React: "same array as before, skip re-render"
}

// CORRECT — new array, new reference, React re-renders
function addOrder(newOrder: Order) {
  setOrders([...orders, newOrder]);
  // Or: setOrders(prev => [...prev, newOrder])
}

// BROKEN — mutating a nested object
function updateOrderStatus(id: number, status: string) {
  const order = orders.find(o => o.id === id);
  if (order) {
    order.status = status; // direct mutation
    setOrders(orders);     // same reference, no re-render
  }
}

// CORRECT — produce a new array with a new object for the changed item
function updateOrderStatus(id: number, status: string) {
  setOrders(orders.map(o => o.id === id ? { ...o, status } : o));
}

Vue 3 uses a Proxy-based reactivity system (ref and reactive) that does track direct mutations — Vue is closer to WPF/MAUI in this respect. In React, always return new objects and arrays. Redux Toolkit’s reducers use Immer internally, which lets you write mutating syntax that is converted to immutable updates behind the scenes.
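To make the contrast explicit, the same updates that silently fail in React are fine in Vue, because the reactivity proxy observes the mutation (a sketch reusing the Order shape from above):

// Vue 3: direct mutation is tracked by the reactivity system
import { ref } from "vue";

const orders = ref<Order[]>([]);

function addOrder(newOrder: Order) {
  orders.value.push(newOrder);          // fine: Vue sees the mutation
}

function updateOrderStatus(id: number, status: Order["status"]) {
  const order = orders.value.find((o) => o.id === id);
  if (order) order.status = status;     // fine: nested mutation is tracked too
}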

Gotcha 2: useEffect Is Not OnInitialized — It Re-Runs

.NET engineers reach for useEffect as a lifecycle hook equivalent to OnInitialized in Blazor or the ViewModel constructor. The dependency array controls when it runs, and this is frequently misconfigured.

// BROKEN — missing userId in dependency array
// Fetches once on mount, never re-fetches when userId changes
useEffect(() => {
  fetchOrders(userId).then(setOrders);
}, []); // empty array = run once only

// BROKEN — no dependency array at all
// Runs after every single render — infinite loop if setOrders triggers a render
useEffect(() => {
  fetchOrders(userId).then(setOrders);
});

// CORRECT — re-fetches whenever userId changes
useEffect(() => {
  let cancelled = false;
  fetchOrders(userId).then(data => {
    if (!cancelled) setOrders(data);
  });
  // Cleanup function runs before the next effect and on unmount
  return () => { cancelled = true; };
}, [userId]);

The cleanup function is important: if userId changes while a fetch is in-flight, the stale fetch should not update state. This is the pattern TanStack Query handles automatically. If you find yourself writing useEffect for data fetching with cleanup, cancellation, and error handling, reach for TanStack Query instead.

Gotcha 3: Context Does Not Replace a Store — It Has Different Performance Characteristics

A common pattern seen in codebases that avoided Redux “because it’s too complex” is one enormous Context provider containing all application state. This creates a performance problem that is hard to diagnose.

// PROBLEMATIC — one big context causes every consumer to re-render
// on every state change, even unrelated ones
const AppContext = createContext<{
  user: User | null;
  cart: CartItem[];
  theme: "light" | "dark";
  notifications: Notification[];
  selectedOrderId: number | null;
}>(...);

// When notifications change, UserMenu re-renders even though it only uses user.
// When cart changes, ThemeToggle re-renders even though it only uses theme.
function UserMenu() {
  const { user } = useContext(AppContext); // re-renders on every context change
  return <div>{user?.name}</div>;
}
// CORRECT — split by update domain; or use Zustand with selectors
const UserContext = createContext<User | null>(null);
const CartContext = createContext<CartStore | null>(null);
const ThemeContext = createContext<"light" | "dark">("light");

// Each context updates independently; consumers only re-render
// when their specific context changes

If you need fine-grained subscriptions from a single state object, use Zustand. Its selector-based subscription model was designed for exactly this problem.
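
A minimal sketch of that selector model (the store shape and types here are illustrative, not taken from a real codebase):

import { create } from "zustand";

// Illustrative stand-ins for the User/Notification shapes used in the Context example above
interface User { id: number; name: string }
interface AppNotification { id: number; message: string }

interface AppState {
  user: User | null;
  notifications: AppNotification[];
  setUser: (user: User | null) => void;
  addNotification: (n: AppNotification) => void;
}

export const useAppStore = create<AppState>()((set) => ({
  user: null,
  notifications: [],
  setUser: (user) => set({ user }),
  addNotification: (n) =>
    set((s) => ({ notifications: [...s.notifications, n] })),
}));

// UserMenu subscribes to `user` only; adding a notification does not re-render it
function UserMenu() {
  const user = useAppStore((s) => s.user);
  return <div>{user?.name}</div>;
}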

Gotcha 4: Global State Is Often the Wrong Tool for Server Data

The .NET pattern of loading data in a service, caching it, and pushing updates to subscribers via events works well on the server. Replicating it in the client with Redux leads to stores full of users: User[], orders: Order[], and complex loading/error flag management — all of which TanStack Query handles automatically and correctly.

// Redux approach — you're reinventing a cache
const usersSlice = createSlice({
  name: "users",
  initialState: {
    entities: {} as Record<number, User>,
    loadingIds: [] as number[],
    errorIds: [] as number[],
  },
  reducers: {
    fetchStart(state, action: PayloadAction<number>) {
      state.loadingIds.push(action.payload);
    },
    fetchSuccess(state, action: PayloadAction<User>) {
      state.entities[action.payload.id] = action.payload;
      state.loadingIds = state.loadingIds.filter(id => id !== action.payload.id);
    },
    // ... fetchError, invalidate, etc.
  },
});

// TanStack Query approach — the cache is handled for you
function useUser(userId: number) {
  return useQuery({
    queryKey: ["users", userId],
    queryFn: () => userService.getById(userId),
    staleTime: 60_000,
  });
}
// Loading state, error state, caching, deduplication, background refresh: included

The rule of thumb: if the data lives on a server and needs to be fetched, it belongs in TanStack Query. If it is purely client-side state (user preferences not yet saved, UI state, session-only selections), it belongs in a store or local state.

Gotcha 5: useState Updates Are Asynchronous

In Blazor, setting a field and calling StateHasChanged() synchronously queues a re-render. In React, setState calls are batched and asynchronous within event handlers — the new state is not immediately visible.

// BROKEN — reading state immediately after setting it
function handleSubmit() {
  setCount(count + 1);
  console.log(count); // Logs the OLD value — state hasn't updated yet

  setCount(count + 1); // This is count + 1, same as above
  setCount(count + 1); // Also count + 1, not count + 3
}

// CORRECT — use the functional form when new state depends on old state
function handleSubmit() {
  setCount(prev => prev + 1);
  setCount(prev => prev + 1);
  setCount(prev => prev + 1);
  // Now all three increments apply correctly
}

// CORRECT — read state in the render function, not after setting it
// React will re-render the component after state updates

React 18 batches state updates across async boundaries too (automatic batching), so multiple setState calls in an event handler or after await result in a single re-render.

Hands-On Exercise

Build a simple order management dashboard that exercises each layer of state management.

Setup:

// types.ts
export interface Order {
  id: number;
  customerId: number;
  total: number;
  status: "pending" | "processing" | "fulfilled" | "cancelled";
  createdAt: string;
}

// Simulated API (replace with real fetch calls)
export const orderApi = {
  getAll: async (): Promise<Order[]> => {
    await new Promise(r => setTimeout(r, 300));
    return [
      { id: 1, customerId: 1, total: 99.99, status: "pending", createdAt: "2026-02-18" },
      { id: 2, customerId: 2, total: 249.50, status: "processing", createdAt: "2026-02-17" },
      { id: 3, customerId: 1, total: 49.00, status: "fulfilled", createdAt: "2026-02-16" },
    ];
  },

  updateStatus: async (id: number, status: Order["status"]): Promise<Order> => {
    await new Promise(r => setTimeout(r, 200));
    return { id, customerId: 1, total: 99.99, status, createdAt: "2026-02-18" };
  },
};

Exercise 1 — TanStack Query for server state:

// Set up TanStack Query in App.tsx:
// import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
// const queryClient = new QueryClient();
// Wrap your app: <QueryClientProvider client={queryClient}>

function OrderList() {
  // TODO: Use useQuery to fetch orders from orderApi.getAll()
  // Show loading state, error state, and the list of orders
  // Bonus: Add a "Refetch" button that calls refetch()
}

function UpdateStatusButton({ order }: { order: Order }) {
  const queryClient = useQueryClient();
  // TODO: Use useMutation to call orderApi.updateStatus()
  // On success, invalidate the ["orders"] query key
  // Show a loading state on the button while the mutation is pending
}
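
One possible solution sketch for Exercise 1 (TanStack Query v5, where the mutation's in-flight flag is isPending; it assumes Order and orderApi are exported from the setup file above):

import { useMutation, useQuery, useQueryClient } from "@tanstack/react-query";
import { orderApi, type Order } from "./types"; // assumes both live in the setup file above

function OrderList() {
  const { data: orders, isLoading, isError, refetch } = useQuery({
    queryKey: ["orders"],
    queryFn: () => orderApi.getAll(),
  });

  if (isLoading) return <p>Loading orders…</p>;
  if (isError) return <p>Failed to load orders.</p>;

  return (
    <div>
      <button onClick={() => refetch()}>Refetch</button>
      <ul>
        {orders?.map((o) => (
          <li key={o.id}>
            #{o.id} ({o.status}) <UpdateStatusButton order={o} />
          </li>
        ))}
      </ul>
    </div>
  );
}

function UpdateStatusButton({ order }: { order: Order }) {
  const queryClient = useQueryClient();
  const mutation = useMutation({
    mutationFn: (status: Order["status"]) => orderApi.updateStatus(order.id, status),
    // Invalidating the query key triggers a background refetch of the orders list
    onSuccess: () => queryClient.invalidateQueries({ queryKey: ["orders"] }),
  });

  return (
    <button disabled={mutation.isPending} onClick={() => mutation.mutate("fulfilled")}>
      {mutation.isPending ? "Updating…" : "Mark fulfilled"}
    </button>
  );
}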

Exercise 2 — Zustand for UI state:

// TODO: Create a Zustand store for dashboard UI state
// It should track:
//   - selectedOrderId: number | null
//   - filterStatus: Order["status"] | "all"
//   - isSidebarOpen: boolean
//
// Create three components that each subscribe to only the slice they need:
//   - OrderFilter: reads filterStatus, updates it via a select element
//   - SidebarToggle: reads isSidebarOpen, toggles it
//   - OrderDetail: reads selectedOrderId, displays the selected order's details
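
A possible store sketch for Exercise 2 (names and action signatures are illustrative):

import { create } from "zustand";
import type { Order } from "./types"; // the Order type from the setup above

interface DashboardUiState {
  selectedOrderId: number | null;
  filterStatus: Order["status"] | "all";
  isSidebarOpen: boolean;
  selectOrder: (id: number | null) => void;
  setFilterStatus: (status: Order["status"] | "all") => void;
  toggleSidebar: () => void;
}

export const useDashboardStore = create<DashboardUiState>()((set) => ({
  selectedOrderId: null,
  filterStatus: "all",
  isSidebarOpen: true,
  selectOrder: (id) => set({ selectedOrderId: id }),
  setFilterStatus: (status) => set({ filterStatus: status }),
  toggleSidebar: () => set((s) => ({ isSidebarOpen: !s.isSidebarOpen })),
}));

// Each component subscribes to only the slice it needs, e.g.:
// const filterStatus = useDashboardStore((s) => s.filterStatus);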

Exercise 3 — Context for auth:

// TODO: Create an AuthContext that provides:
//   - currentUser: { id: number; name: string; role: "admin" | "viewer" } | null
//   - login(userId: number): void  (just set a mock user, no real auth)
//   - logout(): void
//
// Add a guard: if the user's role is "viewer", the UpdateStatusButton should be disabled
// The guard should read from AuthContext, not from props
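
A possible sketch for the AuthContext (the mock login below just picks a role from the user id; a real app would delegate to Clerk or your auth provider):

import { createContext, useContext, useState, type ReactNode } from "react";

interface AuthUser {
  id: number;
  name: string;
  role: "admin" | "viewer";
}

interface AuthContextValue {
  currentUser: AuthUser | null;
  login: (userId: number) => void;
  logout: () => void;
}

const AuthContext = createContext<AuthContextValue | null>(null);

export function AuthProvider({ children }: { children: ReactNode }) {
  const [currentUser, setCurrentUser] = useState<AuthUser | null>(null);

  // Mock login: user 1 is an admin, everyone else is a viewer (illustrative only)
  const login = (userId: number) =>
    setCurrentUser({ id: userId, name: `User ${userId}`, role: userId === 1 ? "admin" : "viewer" });
  const logout = () => setCurrentUser(null);

  return (
    <AuthContext.Provider value={{ currentUser, login, logout }}>
      {children}
    </AuthContext.Provider>
  );
}

export function useAuth() {
  const ctx = useContext(AuthContext);
  if (!ctx) throw new Error("useAuth must be used inside <AuthProvider>");
  return ctx;
}

// Guard inside UpdateStatusButton:
// const { currentUser } = useAuth();
// <button disabled={currentUser?.role !== "admin"} ...>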

Quick Reference

| Scenario | React | Vue | Notes |
|---|---|---|---|
| Local component state | useState | ref / reactive | First choice for any state |
| Derived state from local state | useMemo | computed | Recalculates only when inputs change |
| Shared state between siblings | Lift to parent | Lift to parent | Pass via props and callbacks/emits |
| Cross-tree shared UI state | Zustand | Pinia | Avoid Context for high-frequency updates |
| Authentication / current user | Context or Zustand | Pinia | Low-frequency changes; Context is fine |
| Server data (API calls) | TanStack Query | TanStack Vue Query | Not a global store — it is a cache |
| Form state | React Hook Form | VeeValidate | Do not use global store for form fields |
| URL-derived state | useSearchParams | useRoute / useRouter | Shareable, bookmarkable state |
| Persisted client state | Zustand + persist middleware | Pinia + pinia-plugin-persistedstate | Syncs to localStorage automatically |
| Global store (legacy) | Redux Toolkit | Pinia | Redux for existing codebases only |
| Optimistic updates | TanStack Query onMutate | TanStack Vue Query | Update cache before server confirms |

Further Reading

  • TanStack Query Documentation — The primary reference. The “Guides and Concepts” section, particularly “Query Invalidation” and “Optimistic Updates”, covers 90% of real-world usage.
  • Zustand GitHub — The README is short and complete. Read the entire thing.
  • Pinia Documentation — Official Vue store documentation. The “Core Concepts” section maps directly to ViewModel concepts.
  • Thinking in React — The official React guide on identifying where state should live. Short and precise.
  • TkDodo’s Blog: Practical React Query — The best non-official resource on TanStack Query patterns. The series on “React Query and TypeScript” is particularly relevant.

Styling: CSS-in-JS, Tailwind, and CSS Modules

For .NET engineers who know: Plain CSS, Bootstrap, or a component library like Telerik/DevExpress in web projects; Blazor component isolation via .razor.css files
You’ll learn: The major styling approaches in modern JS projects, why Tailwind is different from Bootstrap, and how to use it effectively
Time: 10-15 minutes

The .NET Way (What You Already Know)

In .NET web projects, styling typically means one of a few approaches. A plain CSS file linked in _Layout.cshtml or wwwroot. Bootstrap loaded via CDN or LibMan, giving you utility classes and components. A third-party component library (Telerik UI, DevExpress, Syncfusion) that ships pre-styled components. Or, in Blazor, per-component isolation via a sidecar .razor.css file that Blazor scopes automatically.

/* MyComponent.razor.css — scoped to MyComponent automatically by Blazor */
.order-card {
  border: 1px solid #dee2e6;
  border-radius: 4px;
  padding: 16px;
}

.order-card .title {
  font-weight: 600;
  font-size: 1.125rem;
}
<!-- Blazor component — styles above apply only here -->
<div class="order-card">
  <h2 class="title">@Order.Id</h2>
</div>

The .NET ecosystem has a clear answer to “how do I style this”: either global CSS that you manage carefully to avoid conflicts, or isolation files that Blazor scopes for you. The JS ecosystem does not have one answer — it has five, each with different trade-offs. Understanding all five is necessary for reading existing codebases; knowing which one to use for new code matters for your team’s velocity.

The Modern JS/TS Way

Option 1: Plain CSS (Still Valid)

Plain .css files still work. Modern bundlers (Vite, webpack) import them directly:

/* orders.css */
.order-card {
  border: 1px solid #dee2e6;
  border-radius: 4px;
  padding: 16px;
}
// React component
import "./orders.css";

function OrderCard({ order }: { order: Order }) {
  return <div className="order-card">{order.id}</div>;
}

The problem with plain CSS in component-based apps is the same one that drove the creation of CSS Modules: all class names are global. .order-card in orders.css and .order-card in invoices.css are the same class. At scale, naming conventions (BEM, SMACSS) help, but they rely on discipline rather than tooling.

Option 2: CSS Modules (Scoped CSS)

CSS Modules solve the global namespace problem at the build step. Class names in a .module.css file are locally scoped by the bundler — the actual class names in the output are generated hashes that cannot conflict.

/* OrderCard.module.css */
.card {
  border: 1px solid #dee2e6;
  border-radius: 4px;
  padding: 16px;
}

.title {
  font-weight: 600;
  font-size: 1.125rem;
}

.statusBadge {
  display: inline-flex;
  align-items: center;
  padding: 2px 8px;
  border-radius: 9999px;
  font-size: 0.75rem;
}

.statusBadge.pending {
  background-color: #fef3c7;
  color: #92400e;
}

.statusBadge.fulfilled {
  background-color: #d1fae5;
  color: #065f46;
}
// OrderCard.tsx
import styles from "./OrderCard.module.css";

interface Order {
  id: number;
  total: number;
  status: "pending" | "processing" | "fulfilled" | "cancelled";
}

function OrderCard({ order }: { order: Order }) {
  return (
    <div className={styles.card}>
      <h2 className={styles.title}>Order #{order.id}</h2>
      <span className={`${styles.statusBadge} ${styles[order.status] ?? ""}`}>
        {order.status}
      </span>
      <p>${order.total.toFixed(2)}</p>
    </div>
  );
}

The styles.card reference becomes something like _card_1a2b3c in the output — unique per file. Two different component files can both have .card without conflict. This is the direct JS equivalent of Blazor’s .razor.css isolation, implemented at the bundler level.

CSS Modules can be made type-safe via the typed-css-modules tool, which generates .d.ts declarations from your .module.css files, or the typescript-plugin-css-modules IDE plugin, so that a typo like styles.cardTypo is caught by the type checker instead of silently resolving to undefined.

Option 3: Tailwind CSS (Utility-First)

Tailwind is a utility-first CSS framework. Instead of writing CSS classes that describe components (.order-card, .status-badge), you compose small single-purpose utility classes directly in your HTML.

// The same OrderCard, using Tailwind utilities
function OrderCard({ order }: { order: Order }) {
  const statusColors: Record<Order["status"], string> = {
    pending: "bg-amber-100 text-amber-800",
    processing: "bg-blue-100 text-blue-800",
    fulfilled: "bg-emerald-100 text-emerald-800",
    cancelled: "bg-red-100 text-red-800",
  };

  return (
    <div className="border border-gray-200 rounded-md p-4">
      <h2 className="font-semibold text-lg">Order #{order.id}</h2>
      <span className={`inline-flex items-center px-2 py-0.5 rounded-full text-xs ${statusColors[order.status]}`}>
        {order.status}
      </span>
      <p className="text-gray-700">${order.total.toFixed(2)}</p>
    </div>
  );
}

At first glance this looks like inline styles — it is not. Tailwind is a stylesheet at build time. The utility classes map to fixed design tokens (spacing scale, color palette, typography scale) defined in your configuration. p-4 is always padding: 1rem. text-lg is always font-size: 1.125rem; line-height: 1.75rem. They come from a design system, not from ad hoc values.

How Tailwind Works (Different from Bootstrap)

Bootstrap ships a pre-built CSS file with component classes (.btn, .card, .navbar). You use those classes as-is or override them. The full stylesheet is large even when trimmed.

Tailwind ships nothing pre-built. The build tool scans your source files for utility class names and generates a stylesheet containing only the classes you actually used. A production Tailwind stylesheet for a large app typically weighs 5-15KB. A full Bootstrap stylesheet is ~150KB.

// tailwind.config.ts — tells Tailwind where to look for class names
import type { Config } from "tailwindcss";

export default {
  content: [
    "./index.html",
    "./src/**/*.{ts,tsx,vue}",
  ],
  theme: {
    extend: {
      colors: {
        brand: {
          50: "#eff6ff",
          500: "#3b82f6",
          900: "#1e3a8a",
        },
      },
      fontFamily: {
        sans: ["Inter", "system-ui", "sans-serif"],
      },
    },
  },
  plugins: [],
} satisfies Config;

The content array is critical: Tailwind scans these files for class name strings. If a class name is constructed dynamically in a way Tailwind cannot see at build time, it will not be included in the output.

Responsive Utilities

Tailwind uses a mobile-first breakpoint system. Prefix any utility with a breakpoint to apply it at that screen size and above:

function ProductGrid() {
  return (
    <div
      className="
        grid
        grid-cols-1
        sm:grid-cols-2
        lg:grid-cols-3
        xl:grid-cols-4
        gap-4
        p-4
        lg:p-8
      "
    >
      {/* 1 column on mobile, 2 on small screens, 3 on large, 4 on XL */}
    </div>
  );
}

Breakpoints: sm (640px), md (768px), lg (1024px), xl (1280px), 2xl (1536px). All are min-width (mobile-first).

Dark Mode

// tailwind.config.ts — enable class-based dark mode
export default {
  darkMode: "class", // toggle by adding .dark to <html>
  // ...
};

// Component — dark: prefix applies in dark mode
function Card({ children }: { children: React.ReactNode }) {
  return (
    <div className="bg-white dark:bg-gray-800 text-gray-900 dark:text-gray-100 rounded-lg p-6 shadow">
      {children}
    </div>
  );
}

Toggle dark mode by adding or removing the dark class on the root html element. Pair this with a Zustand/Pinia store that persists the preference to localStorage.
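
One way to wire that up, sketched with Zustand's persist middleware (the store and storage key names are illustrative):

import { useEffect } from "react";
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface ThemeState {
  theme: "light" | "dark";
  toggleTheme: () => void;
}

// Persisted to localStorage under the "theme-preference" key (name is arbitrary)
export const useThemeStore = create<ThemeState>()(
  persist(
    (set) => ({
      theme: "light",
      toggleTheme: () =>
        set((s) => ({ theme: s.theme === "light" ? "dark" : "light" })),
    }),
    { name: "theme-preference" }
  )
);

// Near the app root: keep the <html> class in sync with the store
function ThemeSync() {
  const theme = useThemeStore((s) => s.theme);
  useEffect(() => {
    document.documentElement.classList.toggle("dark", theme === "dark");
  }, [theme]);
  return null;
}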

The @apply Escape Hatch

When you need to extract a repeated pattern into a named class — for a design system or for generated content that Tailwind cannot scan — use @apply:

/* globals.css or a component stylesheet */
@layer components {
  .btn-primary {
    @apply inline-flex items-center px-4 py-2 rounded-md
           bg-brand-500 text-white font-medium text-sm
           hover:bg-brand-600
           focus:outline-none focus:ring-2 focus:ring-brand-500 focus:ring-offset-2
           disabled:opacity-50 disabled:cursor-not-allowed
           transition-colors duration-150;
  }
}

Use @apply sparingly. It reintroduces the indirection that Tailwind was designed to eliminate. Appropriate uses: base element styles (forms, typography, code blocks), third-party content you cannot annotate with classes, and established design system tokens you want to enforce by name. Do not use @apply as an escape hatch because utility classes “feel wrong” — that feeling passes.

Option 4: CSS-in-JS (styled-components / Emotion)

CSS-in-JS libraries let you write CSS inside TypeScript files as tagged template literals or object notation, scoped to a component automatically.

// styled-components
import styled from "styled-components";

interface CardProps {
  $variant?: "default" | "highlighted";
}

const Card = styled.div<CardProps>`
  border: 1px solid ${(props) => props.$variant === "highlighted" ? "#3b82f6" : "#dee2e6"};
  border-radius: 4px;
  padding: 16px;
  background: ${(props) => props.$variant === "highlighted" ? "#eff6ff" : "white"};
`;

const Title = styled.h2`
  font-weight: 600;
  font-size: 1.125rem;
  margin: 0;
`;

// Usage
function OrderCard({ order, highlighted }: { order: Order; highlighted?: boolean }) {
  return (
    <Card $variant={highlighted ? "highlighted" : "default"}>
      <Title>Order #{order.id}</Title>
    </Card>
  );
}

CSS-in-JS has genuine strengths: styles are collocated with components, TypeScript types flow into style props naturally, conditional styles are first-class. The trade-offs:

  • Runtime cost: styled-components and Emotion inject styles at runtime, adding JavaScript bundle weight and CPU cost on each render. This matters on low-end devices.
  • No static extraction without configuration: by default, styles are not in a separate CSS file; they are generated in JavaScript at render time.
  • Server rendering complexity: SSR requires additional setup to avoid style flash.

You will encounter styled-components in many existing React codebases (particularly those started between 2017 and 2022). Read it fluently. Do not start new projects with it unless the team has strong existing expertise and a specific reason to prefer it over Tailwind.

Option 5: Component Libraries (Covered in Article 3.8)

Libraries like shadcn/ui, Material UI, and Chakra UI provide pre-built components. Some are Tailwind-based, some have their own styling systems. Article 3.8 covers the component library ecosystem in depth.

Key Differences

| .NET Pattern | JS/TS Equivalent | Notes |
|---|---|---|
| Global CSS in wwwroot | globals.css imported in main.tsx | Same concept |
| Blazor .razor.css isolation | CSS Modules (.module.css) | Both scope class names per component |
| Bootstrap component classes | Tailwind utilities | No pre-built components; compose utilities |
| Bootstrap override via SCSS | tailwind.config.ts theme.extend | Extend the design system, do not override it |
| Inline styles (style="...") | style={{ }} in JSX | Same escape hatch for computed values; no utilities |
| CSS class naming discipline (BEM) | CSS Modules, Tailwind | Tooling enforces scoping instead |
| Telerik/DevExpress component themes | Component library design tokens | Each library has its own theming API |
| SCSS variables | Tailwind design tokens in config | CSS custom properties also widely used |
| SCSS @mixin | Tailwind @apply or component abstraction | Use sparingly |

Gotchas for .NET Engineers

Gotcha 1: Dynamic Class Names Are Invisible to Tailwind

Tailwind’s build step scans source files as text. It does not execute your code. If you construct class names by concatenating strings at runtime, Tailwind cannot see the complete class name and will not include it in the output.

// BROKEN — Tailwind sees "bg-${color}-500", not "bg-red-500" or "bg-green-500"
function StatusDot({ color }: { color: string }) {
  return <div className={`bg-${color}-500 w-3 h-3 rounded-full`} />;
}

// CORRECT — always use complete, unbroken class name strings
const colorMap: Record<string, string> = {
  red: "bg-red-500",
  green: "bg-green-500",
  blue: "bg-blue-500",
  amber: "bg-amber-500",
};

function StatusDot({ color }: { color: keyof typeof colorMap }) {
  return <div className={`${colorMap[color]} w-3 h-3 rounded-full`} />;
}

Complete class name strings can appear anywhere in the codebase — in TypeScript, JSX, Vue templates, JSON files, whatever is listed in content. The rule is: the complete class name must appear as a string literal somewhere in a file Tailwind can scan.

Gotcha 2: Specificity Works Differently Than You Expect

In Bootstrap, you override component styles by writing more specific CSS selectors. In Tailwind, all utilities have the same low specificity — one class each. The last class in the cascade wins when utilities conflict.

This means utility ordering in a string does not matter for most cases, but it does matter when you compose utilities from a base set and try to override one:

// The issue: you want to extend a base button but override just the color
function Button({
  className,
  children,
}: {
  className?: string;
  children: React.ReactNode;
}) {
  return (
    <button
      className={`bg-blue-500 text-white px-4 py-2 rounded ${className}`}
    >
      {children}
    </button>
  );
}

// This does NOT reliably override bg-blue-500 with bg-red-500
// Both classes appear in the stylesheet; which one "wins" depends on
// their source order in the generated CSS, not on where they appear in the string
<Button className="bg-red-500">Danger</Button>

The correct solution is the clsx + tailwind-merge pattern:

import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";

// Combine and merge Tailwind classes, resolving conflicts intelligently
function cn(...inputs: ClassValue[]): string {
  return twMerge(clsx(inputs));
}

// Now overrides work correctly — twMerge knows bg-red-500 and bg-blue-500
// are in the same "group" and keeps only the last one
function Button({ className, children }: { className?: string; children: React.ReactNode }) {
  return (
    <button className={cn("bg-blue-500 text-white px-4 py-2 rounded", className)}>
      {children}
    </button>
  );
}

<Button className="bg-red-500">Danger</Button> // bg-red-500 wins

The cn utility function is used in essentially every Tailwind-based component library and design system. You will see it everywhere. Install clsx and tailwind-merge at the start of every project.

Gotcha 3: CSS Modules Object Access Is Not Type-Safe Without Extra Setup

CSS Modules import as a plain JavaScript object at runtime. Without additional tooling, TypeScript types the entire styles import as Record<string, string>, which means typos compile silently:

import styles from "./OrderCard.module.css";

// TypeScript allows this even if .typo does not exist in the CSS file
<div className={styles.typo}>...</div>
// styles.typo === undefined at runtime; no class is applied; no error thrown

Fix this with typescript-plugin-css-modules in your tsconfig.json plugins and IDE configuration, or with typed-css-modules as a build step that generates .d.ts files from your .module.css files. Most teams using CSS Modules set this up from the start.
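
As a rough sketch of the plugin route (the plugin only affects the editor's language service, so your editor typically needs to use the workspace TypeScript version for it to load):

// tsconfig.json (sketch) — registers the language-service plugin for CSS Modules typing
{
  "compilerOptions": {
    "plugins": [
      { "name": "typescript-plugin-css-modules" }
    ]
  }
}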

Gotcha 4: The Tailwind Purge Config Must Cover Every File That Uses Tailwind Classes

This has bitten teams transitioning from a manual stylesheet approach. If you add a new directory of components and forget to add it to content in tailwind.config.ts, those components will have no Tailwind styles in production (where unused classes are purged). In development mode, Tailwind includes all classes, which is why you miss the problem until you deploy.

// tailwind.config.ts — cover every location where Tailwind classes appear
export default {
  content: [
    "./index.html",
    "./src/**/*.{ts,tsx}",           // React
    "./src/**/*.{vue,ts}",           // Vue
    "./src/**/*.{svelte,ts}",        // Svelte
    "./node_modules/your-lib/**/*.js", // If a dependency uses Tailwind classes
  ],
  // ...
};

Any file path pattern added to the project (a features/ directory, a packages/ directory in a monorepo) needs a corresponding entry in content. Treat this like adding a new project reference — easy to forget, immediately visible when you deploy.

Gotcha 5: styled-components Props Convention Changed in v6

If you read existing code using styled-components and see prop names without a $ prefix used to pass styling props, it is pre-v6 code. In styled-components v6, props that are only for styling (not forwarded to the DOM element) must be prefixed with $ to prevent React’s “unknown prop” warning.

// styled-components v5 — prop forwarded to DOM, React warns about unknown prop
const Card = styled.div<{ variant: string }>`
  border: ${(p) => p.variant === "error" ? "1px solid red" : "1px solid gray"};
`;
<Card variant="error">...</Card> // React warns: "variant" is not a valid DOM attribute

// styled-components v6 — $ prefix marks it as a transient prop, not forwarded to DOM
const Card = styled.div<{ $variant: string }>`
  border: ${(p) => p.$variant === "error" ? "1px solid red" : "1px solid gray"};
`;
<Card $variant="error">...</Card> // No warning; $ is stripped before forwarding

When reading older styled-components code, a non-$-prefixed prop passed to a styled component was either an intentional DOM attribute or a source of unintentional console warnings. When writing new styled-components code, prefix all styling-only props with $.

Hands-On Exercise

Build a status badge component using each of the three main approaches to compare them.

Specification: A StatusBadge component that:

  • Accepts a status prop: "pending" | "processing" | "fulfilled" | "cancelled"
  • Renders a pill-shaped badge with an appropriate color per status
  • Colors: pending=amber, processing=blue, fulfilled=green, cancelled=red
  • Typography: small, semibold, uppercase

Exercise 1 — Tailwind approach:

// StatusBadge.tsx — implement using only Tailwind utilities
interface StatusBadgeProps {
  status: "pending" | "processing" | "fulfilled" | "cancelled";
}

// The cn() helper:
import { clsx } from "clsx";
import { twMerge } from "tailwind-merge";
const cn = (...inputs: Parameters<typeof clsx>) => twMerge(clsx(inputs));

export function StatusBadge({ status }: StatusBadgeProps) {
  // TODO: implement using Tailwind
  // Hint: define a statusStyles object mapping status -> class string
  // Use cn() to combine the base classes with the status-specific classes
}

Exercise 2 — CSS Modules approach:

/* StatusBadge.module.css — implement the same visual result with CSS */
.badge {
  /* base styles */
}

.pending { /* amber */ }
.processing { /* blue */ }
.fulfilled { /* green */ }
.cancelled { /* red */ }
// StatusBadge.tsx with CSS Modules
import styles from "./StatusBadge.module.css";

export function StatusBadge({ status }: StatusBadgeProps) {
  // TODO: implement using CSS Modules
}
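
One possible CSS Modules implementation for comparison (it assumes the stylesheet stub above has been filled in, with class names matching the statuses):

// StatusBadge.tsx — CSS Modules solution sketch
import styles from "./StatusBadge.module.css";

interface StatusBadgeProps {
  status: "pending" | "processing" | "fulfilled" | "cancelled";
}

export function StatusBadge({ status }: StatusBadgeProps) {
  // styles[status] resolves to the matching .pending/.processing/.fulfilled/.cancelled class
  return <span className={`${styles.badge} ${styles[status]}`}>{status}</span>;
}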

Exercise 3 — Responsive extension:

Extend the Tailwind version to:

  • On mobile: show just the colored dot, no text
  • On sm: and above: show the full text badge
// Hint: use Tailwind's responsive prefixes and conditional rendering
// sm:hidden / hidden sm:block for toggling visibility

Reference solution for Exercise 1:

export function StatusBadge({ status }: StatusBadgeProps) {
  const statusStyles: Record<StatusBadgeProps["status"], string> = {
    pending: "bg-amber-100 text-amber-800",
    processing: "bg-blue-100 text-blue-800",
    fulfilled: "bg-emerald-100 text-emerald-800",
    cancelled: "bg-red-100 text-red-800",
  };

  return (
    <span
      className={cn(
        "inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-semibold uppercase tracking-wide",
        statusStyles[status]
      )}
    >
      {status}
    </span>
  );
}

Quick Reference

| Approach | Best For | Avoid When |
|---|---|---|
| Plain CSS | Simple pages, prototype, no component system | Component library at any scale |
| CSS Modules | Component-level scoping without Tailwind overhead | Team already uses Tailwind consistently |
| Tailwind CSS | New projects, design system, most React/Vue work | Strong existing CSS/SCSS architecture |
| styled-components / Emotion | Reading legacy code, CSS-in-JS team preference | Performance-critical SSR, new projects |
| Bootstrap | Rapid prototype with minimal design | Need design system flexibility |

| Tailwind Utility | CSS Equivalent | Notes |
|---|---|---|
| p-4 | padding: 1rem | 4 = 4 × 0.25rem = 1rem |
| px-4 py-2 | padding: 0.5rem 1rem | x = horizontal, y = vertical |
| m-auto | margin: auto | Centering trick |
| flex items-center justify-between | flexbox row, vertically centered, space-between | Common layout pattern |
| grid grid-cols-3 gap-4 | display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem | |
| text-sm | font-size: 0.875rem; line-height: 1.25rem | Typography scale |
| font-semibold | font-weight: 600 | |
| rounded-md | border-radius: 0.375rem | |
| shadow-sm | Small box shadow | |
| hover:bg-blue-600 | :hover { background-color: ... } | State prefix |
| focus:ring-2 | :focus { box-shadow: 0 0 0 2px ... } | Accessibility outline |
| sm:hidden | @media (min-width: 640px) { display: none } | Responsive prefix |
| dark:bg-gray-800 | @media (prefers-color-scheme: dark) { ... } | Dark mode prefix |
| disabled:opacity-50 | :disabled { opacity: 0.5 } | Disabled state |

Further Reading

  • Tailwind CSS Documentation — The official docs are excellent. Start with “Utility-First Fundamentals” and “Responsive Design”.
  • CSS Modules Specification — Short specification document explaining the scoping rules.
  • tailwind-merge — The library that resolves Tailwind class conflicts. Read the README for the rules it applies.
  • Tailwind CSS IntelliSense — VS Code extension. Autocomplete, hover previews, and linting for Tailwind. Install before you write a single line.
  • styled-components Documentation — Reference for reading existing code. The “API Reference” section covers the $-prefix transient props change.

Component Libraries and Design Systems

For .NET engineers who know: Telerik UI for Blazor, DevExpress, MudBlazor, or Syncfusion — component libraries where you install a package and get styled, interactive widgets
You’ll learn: The headless component philosophy, how shadcn/ui works as our React choice, the Vue ecosystem options, and when to build vs. buy
Time: 10-15 minutes

The .NET Way (What You Already Know)

In .NET UI development, a component library typically means one package that delivers both behavior and visual styling. You install Telerik or MudBlazor, add a theme, and you have a data grid with sorting, filtering, virtualization, and a consistent visual design. The library owns the look. Customization happens through theming APIs, CSS variable overrides, or — when you need to go further — fighting the library’s specificity.

// MudBlazor — behavior and styling bundled together
<MudDataGrid T="Order" Items="@orders" Filterable="true" SortMode="SortMode.Multiple">
    <Columns>
        <PropertyColumn Property="x => x.Id" Title="Order #" />
        <PropertyColumn Property="x => x.Total" Title="Total" Format="C" />
        <PropertyColumn Property="x => x.Status" Title="Status" />
    </Columns>
</MudDataGrid>

The trade-off is familiar: you move fast when the library’s design matches your requirements, and you slow down considerably when it does not. Overriding a Telerik theme for a specific design system can produce more CSS than you would have written from scratch.

The Modern JS/TS Way

The Headless Component Philosophy

The JS ecosystem split “behavior” and “styling” into two separate concerns at the library level. A headless component library implements all the hard interactive logic — keyboard navigation, ARIA attributes, focus management, accessibility semantics — but renders nothing with any visual style. You provide the markup and classes.

This is a different contract than MudBlazor. The library does not own the look. It owns the behavior.

// Radix UI — headless dialog. No styles, full accessibility out of the box.
import * as Dialog from "@radix-ui/react-dialog";

function OrderDetailDialog({
  order,
  onClose,
}: {
  order: Order;
  onClose: () => void;
}) {
  return (
    <Dialog.Root open={true} onOpenChange={(open) => !open && onClose()}>
      <Dialog.Portal>
        {/* Dialog.Overlay renders a <div> with role="none" — you style it */}
        <Dialog.Overlay className="fixed inset-0 bg-black/50 backdrop-blur-sm" />

        {/* Dialog.Content renders a <div> with role="dialog", aria-modal, focus trap */}
        <Dialog.Content className="fixed top-1/2 left-1/2 -translate-x-1/2 -translate-y-1/2 bg-white rounded-lg p-6 shadow-xl w-full max-w-md">
          <Dialog.Title className="text-lg font-semibold mb-2">
            Order #{order.id}
          </Dialog.Title>
          <Dialog.Description className="text-sm text-gray-500 mb-4">
            Total: ${order.total.toFixed(2)}
          </Dialog.Description>

          <Dialog.Close asChild>
            <button className="absolute top-4 right-4 text-gray-400 hover:text-gray-600">
              &times;
            </button>
          </Dialog.Close>
        </Dialog.Content>
      </Dialog.Portal>
    </Dialog.Root>
  );
}

What Radix gives you without any additional work: focus is trapped inside the dialog when open, Escape closes it, the scroll lock is applied to the body, ARIA attributes (role="dialog", aria-modal="true", aria-labelledby, aria-describedby) are wired automatically, and the Dialog.Close button triggers the onOpenChange callback. Try implementing that from scratch correctly, including all edge cases — it is not a weekend project.

The headless libraries in the React ecosystem:

  • Radix UI — the most comprehensive, highest quality. Covers dialogs, dropdowns, select, combobox, tooltip, popover, accordion, tabs, and many more.
  • Headless UI — from the Tailwind CSS team. Smaller component set, excellent quality, designed for Tailwind integration.
  • Floating UI (formerly Popper) — positioning engine for tooltips, dropdowns, anything that needs to float relative to a trigger.
  • React Aria (Adobe) — the most accessibility-focused option, implements ARIA patterns from the WAI-ARIA specification precisely.

shadcn/ui: Our React Choice

shadcn/ui is not a component library in the traditional sense — you do not install it as a dependency. You copy components into your project’s source code and own them. This is the key distinction.

# Add a component — this copies source files into your project
npx shadcn@latest add button
npx shadcn@latest add dialog
npx shadcn@latest add data-table

Each add command writes TypeScript source files into your components/ui/ directory. The button component is your button. You read it, modify it, extend it. There is no version to upgrade and no API surface to stay compatible with.

// components/ui/button.tsx — this file is now yours after shadcn copies it
import * as React from "react";
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";

const buttonVariants = cva(
  // Base classes applied to every button
  "inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium ring-offset-background transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50",
  {
    variants: {
      variant: {
        default: "bg-primary text-primary-foreground hover:bg-primary/90",
        destructive: "bg-destructive text-destructive-foreground hover:bg-destructive/90",
        outline: "border border-input bg-background hover:bg-accent hover:text-accent-foreground",
        secondary: "bg-secondary text-secondary-foreground hover:bg-secondary/80",
        ghost: "hover:bg-accent hover:text-accent-foreground",
        link: "text-primary underline-offset-4 hover:underline",
      },
      size: {
        default: "h-10 px-4 py-2",
        sm: "h-9 rounded-md px-3",
        lg: "h-11 rounded-md px-8",
        icon: "h-10 w-10",
      },
    },
    defaultVariants: {
      variant: "default",
      size: "default",
    },
  }
);

export interface ButtonProps
  extends React.ButtonHTMLAttributes<HTMLButtonElement>,
    VariantProps<typeof buttonVariants> {
  asChild?: boolean;
}

const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
  ({ className, variant, size, asChild = false, ...props }, ref) => {
    const Comp = asChild ? Slot : "button";
    return (
      <Comp
        className={cn(buttonVariants({ variant, size, className }))}
        ref={ref}
        {...props}
      />
    );
  }
);
Button.displayName = "Button";

export { Button, buttonVariants };

The cva function (class-variance-authority) manages variant combinations. Slot from Radix allows the asChild pattern — when asChild is true, the button renders as its child element (useful for wrapping an <a> tag with button styles without nesting). cn is the clsx + tailwind-merge helper from Article 3.7.
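
Usage of the copied component then looks like this (the variant and size names come straight from the buttonVariants definition above):

import { Button } from "@/components/ui/button";

function OrderActions() {
  return (
    <div className="flex gap-2">
      <Button size="sm">Save</Button>
      <Button variant="destructive" size="sm">Cancel order</Button>
      {/* asChild renders the child (<a>) with button classes instead of nesting a <button> */}
      <Button asChild variant="link">
        <a href="/orders">View all orders</a>
      </Button>
    </div>
  );
}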

Why “Copy/Paste” Is Better Than a Dependency Here

With MudBlazor, when a bug exists in the DataGrid, you wait for a library release. When MudBlazor’s design conflicts with your design system, you write override CSS. When a MudBlazor component lacks a prop you need, you open an issue and wait.

With shadcn/ui, the component source is in your repository. You fix bugs, modify design, add props, and delete what you do not need. The component is a starting point, not a contract.

The trade-off: you are responsible for maintenance. If Radix UI releases a fix for a keyboard navigation bug, you do not get it automatically — you re-run the shadcn CLI to see what changed, or you apply the fix manually. For teams with design systems, this trade-off is clearly worth it.

Vue Options

Vue does not have a direct equivalent to shadcn/ui with the same momentum, but the options are mature:

PrimeVue — the most complete Vue component library. Covers DataTable, TreeTable, Calendar, Charts, and many more. Supports both styled and unstyled (headless-like) modes via “PT” (Pass-Through) API and a Tailwind preset.

// PrimeVue unstyled mode — you control all classes via passthrough
import DataTable from "primevue/datatable";
import Column from "primevue/column";

// In main.ts
app.use(PrimeVue, {
  unstyled: true,
  pt: {
    datatable: {
      root: "relative",
      table: "w-full table-auto border-collapse",
      thead: "border-b border-gray-200",
      tbody: "divide-y divide-gray-100",
    },
    column: {
      headercell: "px-4 py-3 text-left text-sm font-semibold text-gray-600",
      bodycell: "px-4 py-3 text-sm text-gray-900",
    },
  },
});

Vuetify — Material Design implementation for Vue. Opinionated about design (Material Design tokens), comprehensive, widely used. Better choice when the client requires Material Design or when the team is already familiar with it.

Naive UI — TypeScript-first, theme-aware, good component quality. The design is more neutral than Vuetify and more customizable.

Radix Vue / Reka UI — headless components for Vue, similar philosophy to Radix UI for React. Reka UI is the actively maintained successor to Radix Vue.

There is no single Vue equivalent of shadcn/ui, but the combination of Reka UI (headless behavior) + Tailwind (styling) + your own component layer achieves the same architecture.

Accessibility: What Headless Libraries Give You for Free

This is worth stating explicitly because it is something .NET engineers underestimate when considering whether to build components from scratch.

Building a fully accessible custom <Select> component requires:

  • combobox role on the input, listbox role on the dropdown
  • aria-expanded, aria-haspopup, aria-controls, aria-activedescendant attributes updated dynamically
  • Keyboard navigation: ArrowDown/ArrowUp for option traversal, Enter/Space to select, Escape to close, Home/End for first/last option, typeahead search by first letter
  • Focus management: focus returns to trigger after close
  • Mobile screen reader compatibility (different from desktop keyboard)
  • Touch support that does not conflict with native behavior

Radix UI and Headless UI implement all of this. When you use their Select or Combobox component, you inherit years of work on accessibility edge cases across browsers and assistive technologies. When you build a <div> dropdown from scratch, you own all of that.

The cost of ignoring accessibility is not just ethical — WCAG compliance is a legal requirement in many jurisdictions. Using headless libraries is the most practical path to compliance without a dedicated accessibility engineer on every team.

When to Build Custom vs. Use a Library

Use a library component when:

  • The component type is in the library (dialog, dropdown, select, tooltip, tabs, accordion)
  • Your needs fit the component’s documented behavior
  • The time to learn the library API is less than the time to build from scratch

Build custom when:

  • The component type is not available anywhere (a specific data visualization, a domain-specific widget)
  • The library component’s behavior fundamentally conflicts with your requirements (not just styling)
  • The library adds dependencies that significantly increase bundle size for a small gain

Do not build custom when:

  • You want to because it seems simpler (it is not — accessibility edge cases are the complexity you are not seeing)
  • The library’s default design does not match yours (change the design; do not rebuild the behavior)
  • You have not tried to style the library component (Radix UI renders semantically correct HTML; Tailwind styling is straightforward)

Extending Library Components

The pattern for building on top of a library component is to wrap it with your own component that applies your design system:

// Wrapping shadcn/ui Badge to add domain-specific variants
import { Badge } from "@/components/ui/badge";
import { cn } from "@/lib/utils";

type OrderStatus = "pending" | "processing" | "fulfilled" | "cancelled";

const statusConfig: Record<OrderStatus, { label: string; className: string }> = {
  pending: { label: "Pending", className: "bg-amber-100 text-amber-800 border-amber-200" },
  processing: { label: "Processing", className: "bg-blue-100 text-blue-800 border-blue-200" },
  fulfilled: { label: "Fulfilled", className: "bg-emerald-100 text-emerald-800 border-emerald-200" },
  cancelled: { label: "Cancelled", className: "bg-red-100 text-red-800 border-red-200" },
};

function OrderStatusBadge({ status }: { status: OrderStatus }) {
  const { label, className } = statusConfig[status];
  return (
    <Badge variant="outline" className={cn("font-medium", className)}>
      {label}
    </Badge>
  );
}

// The base Badge handles sizing, border-radius, and font-size.
// OrderStatusBadge handles domain semantics (which status maps to which color).
// Neither component knows about the other’s concerns.

// Composing Radix primitives into a higher-level component
import * as DropdownMenu from "@radix-ui/react-dropdown-menu";
import { cn } from "@/lib/utils";

interface Action {
  label: string;
  icon?: React.ReactNode;
  onClick: () => void;
  variant?: "default" | "destructive";
  disabled?: boolean;
}

function ActionMenu({
  trigger,
  actions,
}: {
  trigger: React.ReactNode;
  actions: Action[];
}) {
  return (
    <DropdownMenu.Root>
      <DropdownMenu.Trigger asChild>{trigger}</DropdownMenu.Trigger>

      <DropdownMenu.Portal>
        <DropdownMenu.Content
          className="min-w-40 bg-white rounded-md shadow-lg border border-gray-200 py-1 z-50"
          sideOffset={4}
        >
          {actions.map((action, index) => (
            <DropdownMenu.Item
              key={index}
              disabled={action.disabled}
              onClick={action.onClick}
              className={cn(
                "flex items-center gap-2 px-3 py-2 text-sm cursor-default select-none outline-none",
                "hover:bg-gray-50 focus:bg-gray-50",
                "data-[disabled]:opacity-50 data-[disabled]:pointer-events-none",
                action.variant === "destructive" && "text-red-600 hover:bg-red-50 focus:bg-red-50"
              )}
            >
              {action.icon}
              {action.label}
            </DropdownMenu.Item>
          ))}
        </DropdownMenu.Content>
      </DropdownMenu.Portal>
    </DropdownMenu.Root>
  );
}

The data-[disabled] and data-[state] selectors are Radix’s way of exposing component state to CSS. Radix adds data-state="open" / data-state="closed", data-highlighted, data-disabled, and so on to its rendered elements. Tailwind’s arbitrary variant syntax (data-[disabled]:opacity-50) selects on these attributes.

Key Differences

| .NET Library Pattern | JS/TS Equivalent | Notes |
|---|---|---|
| Install NuGet package, get styled components | Install + use (Material UI, Vuetify) | Traditional model; library owns the look |
| CSS override via specificity | Tailwind classes on the rendered element | Headless: no library styles to override |
| Theme customization via SCSS variables | Design tokens in tailwind.config.ts | Upstream from components |
| Telerik/DevExpress DataGrid | TanStack Table + headless component | Behavior library; you build the markup |
| MudBlazor <MudDialog> | Radix <Dialog.Root> + your styles | Same behavior; you control the design |
| Waiting for library bug fix | Edit the copied source file (shadcn) | You own the code |
| Component parameter API | Props API | Same concept, same constraints |
| RenderFragment for slots | children prop / named slots | Different syntax, same idea |
| @bind for two-way binding | Controlled component pattern | value + onChange in React |

Gotchas for .NET Engineers

Gotcha 1: Radix Components Are Compound — You Cannot Use Just the Root

Radix components are composed of multiple primitives that must be used together in a specific structure. Unlike <MudDialog Open="@_open">, you cannot just render the root element and expect behavior to work.

// BROKEN — Dialog.Root alone renders nothing and triggers nothing
<Dialog.Root open={isOpen} onOpenChange={setIsOpen}>
  <div>Some content</div>
</Dialog.Root>

// CORRECT — the full primitive structure is required
<Dialog.Root open={isOpen} onOpenChange={setIsOpen}>
  <Dialog.Portal>          {/* Renders into document.body via React portal */}
    <Dialog.Overlay />     {/* The backdrop, wired to close on click */}
    <Dialog.Content>       {/* The dialog container with ARIA and focus trap */}
      <Dialog.Title />     {/* Required for accessibility (aria-labelledby) */}
      <Dialog.Description /> {/* Optional but recommended */}
      <Dialog.Close />     {/* The close trigger */}
    </Dialog.Content>
  </Dialog.Portal>
</Dialog.Root>

Read the Radix documentation for each component before using it. The compound structure is documented and intentional. Dialog.Portal renders the dialog outside the component tree’s DOM position (into <body>) to avoid stacking context issues — the same problem that makes z-index on modals unreliable in nested components.

Gotcha 2: shadcn/ui Requires a Specific Project Structure and Dependencies

shadcn/ui is not a drop-in library. It requires:

  • Tailwind CSS configured and working
  • The cn utility (clsx + tailwind-merge) at @/lib/utils
  • Specific Tailwind CSS variables for design tokens (--background, --foreground, --primary, etc.) in your globals.css
  • Path aliases (@/) configured in tsconfig.json and your bundler

Running npx shadcn@latest init sets all of this up. Running npx shadcn@latest add button without init first, or without Tailwind configured, will produce a component that does not work and errors that are not obvious.

# Correct initialization sequence for a new Vite + React project
npm create vite@latest my-app -- --template react-ts
cd my-app
npm install
npx tailwindcss init -p        # or use the Vite Tailwind plugin
npx shadcn@latest init         # sets up globals.css, lib/utils, tsconfig paths
npx shadcn@latest add button   # now this works

Gotcha 3: “Unstyled” in PrimeVue and Similar Libraries Is Not the Default

PrimeVue ships with a default styled theme. The unstyled mode (Pass-Through) is opt-in at the app.use(PrimeVue, { unstyled: true }) level. If you configure it styled and then try to add Tailwind classes, the library’s own CSS specificity will often win.

When using PrimeVue with Tailwind, make the decision up front: either use the PrimeVue theme and apply Tailwind only to non-PrimeVue areas, or use unstyled: true from the start and style everything with Tailwind via Pass-Through. Mixing both approaches mid-project is painful.

Gotcha 4: forwardRef Is Required for Library Integration in React

Many component libraries, and the asChild pattern in Radix, require that your custom components forward refs. A .NET engineer wrapping a library component without forwardRef will encounter errors when the library tries to attach a ref to manage focus or positioning.

// BROKEN — Tooltip cannot attach its positioning ref to this component
function MyButton({ children, ...props }: ButtonProps) {
  return <button {...props}>{children}</button>;
}

<Tooltip.Trigger asChild>
  <MyButton>Hover me</MyButton> {/* Radix cannot get a ref to the DOM node */}
</Tooltip.Trigger>

// CORRECT — forwardRef passes the ref through to the DOM element
const MyButton = React.forwardRef<HTMLButtonElement, ButtonProps>(
  ({ children, ...props }, ref) => (
    <button ref={ref} {...props}>{children}</button>
  )
);
MyButton.displayName = "MyButton";

All shadcn/ui components already use forwardRef. When building your own wrapper components that will be placed inside Radix primitives with asChild, always add forwardRef.

Hands-On Exercise

Build a data table with sorting, filtering, and pagination using TanStack Table (the headless table behavior library) styled with Tailwind.

TanStack Table is the standard for complex table needs in the React ecosystem — it handles sorting, filtering, pagination, row selection, column visibility, virtualization, and more, without any rendering assumptions.

Setup:

npm install @tanstack/react-table

Data:

// types.ts
export interface Order {
  id: number;
  customer: string;
  total: number;
  status: "pending" | "processing" | "fulfilled" | "cancelled";
  createdAt: string;
}

export const orders: Order[] = [
  { id: 1, customer: "Acme Corp", total: 1240.00, status: "fulfilled", createdAt: "2026-02-01" },
  { id: 2, customer: "Globex Inc", total: 580.50, status: "processing", createdAt: "2026-02-05" },
  { id: 3, customer: "Initech", total: 3200.00, status: "pending", createdAt: "2026-02-10" },
  { id: 4, customer: "Acme Corp", total: 750.00, status: "cancelled", createdAt: "2026-02-12" },
  { id: 5, customer: "Umbrella Ltd", total: 94.99, status: "fulfilled", createdAt: "2026-02-14" },
  { id: 6, customer: "Initech", total: 450.00, status: "fulfilled", createdAt: "2026-02-15" },
  { id: 7, customer: "Globex Inc", total: 2100.00, status: "processing", createdAt: "2026-02-17" },
  { id: 8, customer: "Umbrella Ltd", total: 88.00, status: "pending", createdAt: "2026-02-18" },
];

Part 1 — Basic sortable table:

import {
  createColumnHelper,
  flexRender,
  getCoreRowModel,
  getSortedRowModel,
  type SortingState,
  useReactTable,
} from "@tanstack/react-table";
import { useState } from "react";
import { Order, orders } from "./types";

const columnHelper = createColumnHelper<Order>();

const columns = [
  columnHelper.accessor("id", {
    header: "Order #",
    cell: (info) => `#${info.getValue()}`,
  }),
  columnHelper.accessor("customer", {
    header: "Customer",
  }),
  columnHelper.accessor("total", {
    header: "Total",
    cell: (info) => `$${info.getValue().toFixed(2)}`,
  }),
  columnHelper.accessor("status", {
    header: "Status",
    // TODO: render an OrderStatusBadge here instead of plain text
    cell: (info) => info.getValue(),
  }),
  columnHelper.accessor("createdAt", {
    header: "Date",
  }),
];

export function OrderTable() {
  const [sorting, setSorting] = useState<SortingState>([]);

  const table = useReactTable({
    data: orders,
    columns,
    state: { sorting },
    onSortingChange: setSorting,
    getCoreRowModel: getCoreRowModel(),
    getSortedRowModel: getSortedRowModel(),
  });

  return (
    <div className="overflow-x-auto rounded-lg border border-gray-200">
      <table className="w-full text-sm text-left">
        <thead className="bg-gray-50 border-b border-gray-200">
          {table.getHeaderGroups().map((headerGroup) => (
            <tr key={headerGroup.id}>
              {headerGroup.headers.map((header) => (
                <th
                  key={header.id}
                  className="px-4 py-3 font-semibold text-gray-600 cursor-pointer select-none hover:bg-gray-100"
                  onClick={header.column.getToggleSortingHandler()}
                >
                  <div className="flex items-center gap-1">
                    {flexRender(header.column.columnDef.header, header.getContext())}
                    {/* Sort indicator */}
                    {header.column.getIsSorted() === "asc" && " ↑"}
                    {header.column.getIsSorted() === "desc" && " ↓"}
                    {!header.column.getIsSorted() && header.column.getCanSort() && (
                      <span className="text-gray-300">↕</span>
                    )}
                  </div>
                </th>
              ))}
            </tr>
          ))}
        </thead>
        <tbody className="divide-y divide-gray-100">
          {table.getRowModel().rows.map((row) => (
            <tr key={row.id} className="hover:bg-gray-50 transition-colors">
              {row.getVisibleCells().map((cell) => (
                <td key={cell.id} className="px-4 py-3 text-gray-900">
                  {flexRender(cell.column.columnDef.cell, cell.getContext())}
                </td>
              ))}
            </tr>
          ))}
        </tbody>
      </table>
    </div>
  );
}

Part 2 — Add global filter (search):

import {
  // ... previous imports
  getFilteredRowModel,
  type ColumnFiltersState,
} from "@tanstack/react-table";

// TODO: Add to the component:
// 1. const [globalFilter, setGlobalFilter] = useState("");
// 2. Add to useReactTable: { state: { sorting, globalFilter }, onGlobalFilterChange: setGlobalFilter, getFilteredRowModel: getFilteredRowModel() }
// 3. Add a search input above the table:
//    <input
//      value={globalFilter}
//      onChange={(e) => setGlobalFilter(e.target.value)}
//      placeholder="Search orders..."
//      className="px-3 py-2 border border-gray-300 rounded-md text-sm w-64 focus:outline-none focus:ring-2 focus:ring-blue-500"
//    />

// TODO: Add status filter (a <select> to filter by status column only):
// columnHelper.accessor("status", {
//   ...
//   filterFn: "equals",  // exact match instead of contains
// }),
// const [columnFilters, setColumnFilters] = useState<ColumnFiltersState>([]);
// table.getColumn("status")?.setFilterValue(selectedStatus || undefined)

Part 3 — Add pagination:

import {
  // ... previous imports
  getPaginationRowModel,
  type PaginationState,
} from "@tanstack/react-table";

// TODO: Add pagination to the table:
// 1. const [pagination, setPagination] = useState<PaginationState>({ pageIndex: 0, pageSize: 5 });
// 2. Add to useReactTable: { state: { ..., pagination }, onPaginationChange: setPagination, getPaginationRowModel: getPaginationRowModel() }
// 3. Add controls below the table:
//    - "Previous" button: table.previousPage(), disabled when !table.getCanPreviousPage()
//    - "Next" button: table.nextPage(), disabled when !table.getCanNextPage()
//    - Page indicator: "Page {table.getState().pagination.pageIndex + 1} of {table.getPageCount()}"
//    - Rows per page select: table.setPageSize(Number(value))

Complete working solution (all three parts combined):

import {
  createColumnHelper,
  flexRender,
  getCoreRowModel,
  getFilteredRowModel,
  getPaginationRowModel,
  getSortedRowModel,
  type ColumnFiltersState,
  type PaginationState,
  type SortingState,
  useReactTable,
} from "@tanstack/react-table";
import { useState } from "react";
import { Order, orders } from "./types";

const columnHelper = createColumnHelper<Order>();

const statusColors: Record<Order["status"], string> = {
  pending: "bg-amber-100 text-amber-800",
  processing: "bg-blue-100 text-blue-800",
  fulfilled: "bg-emerald-100 text-emerald-800",
  cancelled: "bg-red-100 text-red-800",
};

const columns = [
  columnHelper.accessor("id", {
    header: "Order #",
    cell: (info) => <span className="font-mono">#{info.getValue()}</span>,
  }),
  columnHelper.accessor("customer", {
    header: "Customer",
  }),
  columnHelper.accessor("total", {
    header: "Total",
    cell: (info) => (
      <span className="font-medium">${info.getValue().toFixed(2)}</span>
    ),
  }),
  columnHelper.accessor("status", {
    header: "Status",
    filterFn: "equals",
    cell: (info) => (
      <span
        className={`inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-semibold ${statusColors[info.getValue()]}`}
      >
        {info.getValue()}
      </span>
    ),
  }),
  columnHelper.accessor("createdAt", {
    header: "Date",
  }),
];

export function OrderTable() {
  const [sorting, setSorting] = useState<SortingState>([]);
  const [globalFilter, setGlobalFilter] = useState("");
  const [columnFilters, setColumnFilters] = useState<ColumnFiltersState>([]);
  const [pagination, setPagination] = useState<PaginationState>({
    pageIndex: 0,
    pageSize: 5,
  });

  const table = useReactTable({
    data: orders,
    columns,
    state: { sorting, globalFilter, columnFilters, pagination },
    onSortingChange: setSorting,
    onGlobalFilterChange: setGlobalFilter,
    onColumnFiltersChange: setColumnFilters,
    onPaginationChange: setPagination,
    getCoreRowModel: getCoreRowModel(),
    getSortedRowModel: getSortedRowModel(),
    getFilteredRowModel: getFilteredRowModel(),
    getPaginationRowModel: getPaginationRowModel(),
  });

  const statusOptions: Array<Order["status"] | ""> = [
    "",
    "pending",
    "processing",
    "fulfilled",
    "cancelled",
  ];

  return (
    <div className="space-y-4">
      {/* Toolbar */}
      <div className="flex items-center gap-3">
        <input
          value={globalFilter}
          onChange={(e) => setGlobalFilter(e.target.value)}
          placeholder="Search orders..."
          className="px-3 py-2 border border-gray-300 rounded-md text-sm w-64 focus:outline-none focus:ring-2 focus:ring-blue-500"
        />
        <select
          value={
            (table.getColumn("status")?.getFilterValue() as string) ?? ""
          }
          onChange={(e) =>
            table.getColumn("status")?.setFilterValue(e.target.value || undefined)
          }
          className="px-3 py-2 border border-gray-300 rounded-md text-sm focus:outline-none focus:ring-2 focus:ring-blue-500"
        >
          {statusOptions.map((s) => (
            <option key={s} value={s}>
              {s === "" ? "All statuses" : s}
            </option>
          ))}
        </select>
      </div>

      {/* Table */}
      <div className="overflow-x-auto rounded-lg border border-gray-200">
        <table className="w-full text-sm text-left">
          <thead className="bg-gray-50 border-b border-gray-200">
            {table.getHeaderGroups().map((headerGroup) => (
              <tr key={headerGroup.id}>
                {headerGroup.headers.map((header) => (
                  <th
                    key={header.id}
                    className="px-4 py-3 font-semibold text-gray-600 cursor-pointer select-none hover:bg-gray-100"
                    onClick={header.column.getToggleSortingHandler()}
                  >
                    <div className="flex items-center gap-1">
                      {flexRender(header.column.columnDef.header, header.getContext())}
                      {header.column.getIsSorted() === "asc" && " ↑"}
                      {header.column.getIsSorted() === "desc" && " ↓"}
                      {!header.column.getIsSorted() && header.column.getCanSort() && (
                        <span className="text-gray-300">↕</span>
                      )}
                    </div>
                  </th>
                ))}
              </tr>
            ))}
          </thead>
          <tbody className="divide-y divide-gray-100">
            {table.getRowModel().rows.length === 0 ? (
              <tr>
                <td colSpan={columns.length} className="px-4 py-8 text-center text-gray-400">
                  No orders match the current filters.
                </td>
              </tr>
            ) : (
              table.getRowModel().rows.map((row) => (
                <tr key={row.id} className="hover:bg-gray-50 transition-colors">
                  {row.getVisibleCells().map((cell) => (
                    <td key={cell.id} className="px-4 py-3 text-gray-900">
                      {flexRender(cell.column.columnDef.cell, cell.getContext())}
                    </td>
                  ))}
                </tr>
              ))
            )}
          </tbody>
        </table>
      </div>

      {/* Pagination */}
      <div className="flex items-center justify-between text-sm text-gray-600">
        <span>
          {table.getFilteredRowModel().rows.length} total order
          {table.getFilteredRowModel().rows.length !== 1 ? "s" : ""}
        </span>
        <div className="flex items-center gap-2">
          <select
            value={table.getState().pagination.pageSize}
            onChange={(e) => table.setPageSize(Number(e.target.value))}
            className="px-2 py-1 border border-gray-300 rounded text-sm"
          >
            {[5, 10, 20].map((size) => (
              <option key={size} value={size}>
                {size} per page
              </option>
            ))}
          </select>
          <span>
            Page {table.getState().pagination.pageIndex + 1} of{" "}
            {table.getPageCount()}
          </span>
          <button
            onClick={() => table.previousPage()}
            disabled={!table.getCanPreviousPage()}
            className="px-3 py-1 border border-gray-300 rounded text-sm hover:bg-gray-50 disabled:opacity-40 disabled:cursor-not-allowed"
          >
            Previous
          </button>
          <button
            onClick={() => table.nextPage()}
            disabled={!table.getCanNextPage()}
            className="px-3 py-1 border border-gray-300 rounded text-sm hover:bg-gray-50 disabled:opacity-40 disabled:cursor-not-allowed"
          >
            Next
          </button>
        </div>
      </div>
    </div>
  );
}

Quick Reference

Library | Type | Framework | Use When
Radix UI | Headless | React | Need accessible primitives; building own design system
Headless UI | Headless | React, Vue | Already using Tailwind; smaller component set than Radix
shadcn/ui | Copy-paste (Radix + Tailwind) | React | Our React recommendation; want to own the code
TanStack Table | Headless (table only) | React, Vue, Svelte | Complex table requirements
Material UI | Styled (Material Design) | React | Client requires Material Design; large team familiar with it
PrimeVue | Styled or unstyled | Vue | Complex components (DataTable, Calendar, TreeTable) in Vue
Reka UI | Headless | Vue | Vue equivalent of Radix
Vuetify | Styled (Material Design) | Vue | Material Design requirement in Vue
Naive UI | Styled | Vue | TypeScript-first Vue component library
React Aria | Headless (accessibility-first) | React | Strictest accessibility requirements

Decision | Recommendation
New React project, needs design system | shadcn/ui + Tailwind
New Vue project, needs design system | Reka UI + Tailwind or PrimeVue unstyled
Existing project with styled library | Stay with it; do not mix headless and styled
Complex data table | TanStack Table (all frameworks)
Accessible dropdown/dialog/popover | Radix UI (React) or Reka UI (Vue)
Already have a design system in Figma | Headless library + implement your own styles
Rapid prototype, design does not matter | Material UI or Vuetify (fastest to functional)

Further Reading

  • Radix UI Documentation — Component-by-component reference. Read the “Accessibility” section for each component to understand what behavior you get for free.
  • shadcn/ui Documentation — Setup guide, component catalog, and theming reference. Start with “Installation” and “Theming”.
  • TanStack Table Documentation — The guide section “Column Definitions”, “Sorting”, “Filtering”, and “Pagination” covers 95% of real-world needs.
  • WAI-ARIA Authoring Practices Guide — The W3C specification for accessible widget patterns. Radix and Headless UI implement these. Reading the Dialog and Combobox patterns explains why the headless libraries are structured the way they are.
  • Reka UI Documentation — The actively maintained headless component library for Vue.

Forms and Validation: DataAnnotations to React Hook Form and VeeValidate

For .NET engineers who know: Model binding, DataAnnotations, ModelState, IValidatableObject, and Razor form tag helpers
You’ll learn: How React Hook Form and VeeValidate replace the DataAnnotations pipeline — and how sharing a single Zod schema between your API and your form eliminates duplicate validation logic
Time: 15-20 minutes

The .NET Way (What You Already Know)

In ASP.NET, forms are backed by a model. The model carries its validation rules via DataAnnotations attributes. The framework binds incoming form data to the model, runs the validation pipeline, and populates ModelState. Your controller checks ModelState.IsValid and either proceeds or returns the errors to the view. The entire cycle is automatic once you wire up the model.

// The model — validation lives in attributes
public class CreateOrderRequest
{
    [Required(ErrorMessage = "Customer name is required")]
    [MaxLength(100)]
    public string CustomerName { get; set; } = string.Empty;

    [Required]
    [Range(1, 10000, ErrorMessage = "Quantity must be between 1 and 10,000")]
    public int Quantity { get; set; }

    [Required]
    [EmailAddress]
    public string ContactEmail { get; set; } = string.Empty;

    [CreditCard]
    public string? CardNumber { get; set; }
}

// The controller — ModelState checked automatically by [ApiController]
[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    [HttpPost]
    public async Task<IActionResult> Create([FromBody] CreateOrderRequest request)
    {
        // [ApiController] automatically returns 400 if ModelState.IsValid is false
        // You never check ModelState.IsValid here — it is handled by the framework
        var order = await _orderService.CreateAsync(request);
        return CreatedAtAction(nameof(Get), new { id = order.Id }, order);
    }
}

// The Razor view
<form asp-action="Create">
    <input asp-for="CustomerName" class="form-control" />
    <span asp-validation-for="CustomerName" class="text-danger"></span>

    <input asp-for="Quantity" type="number" />
    <span asp-validation-for="Quantity" class="text-danger"></span>

    <button type="submit">Place Order</button>
</form>

The appeal of this model is the single source of truth: the C# class drives everything — binding, validation, Swagger types, and form display. When you add [MaxLength(50)] to a property, it affects server validation, and if you use client-side validation libraries, it can affect that too.

The problem in the JS world: there is no equivalent of “attribute-driven validation on a class” by default. The form library, the validation library, and your TypeScript types are three separate concerns that you have to connect yourself — unless you use Zod, which does exactly that.

The React Way

The Libraries Involved

Three libraries work together:

  • React Hook Form — manages form state, tracks which fields have been touched, handles submission
  • Zod — defines the schema: types and validation rules in one place
  • @hookform/resolvers — the bridge that connects a Zod schema to React Hook Form’s validation pipeline

Install them:

npm install react-hook-form zod @hookform/resolvers

Defining the Schema with Zod

In the .NET model, DataAnnotations are attributes on properties. In Zod, you describe an object’s shape and validation rules in code:

// schemas/order.schema.ts
import { z } from "zod";

export const createOrderSchema = z.object({
  customerName: z
    .string()
    .min(1, "Customer name is required")
    .max(100, "Customer name must be 100 characters or fewer"),

  quantity: z
    .number({ invalid_type_error: "Quantity must be a number" })
    .int("Quantity must be a whole number")
    .min(1, "Quantity must be at least 1")
    .max(10000, "Quantity cannot exceed 10,000"),

  contactEmail: z
    .string()
    .min(1, "Contact email is required")
    .email("Enter a valid email address"),

  cardNumber: z
    .string()
    .regex(/^\d{16}$/, "Card number must be 16 digits")
    .optional(),
});

// Derive the TypeScript type from the schema — one definition, two uses
export type CreateOrderFormValues = z.infer<typeof createOrderSchema>;

z.infer<typeof createOrderSchema> generates a TypeScript type from the schema. This is the equivalent of your C# model class — except it is inferred from the same object that defines the validation rules. You do not write the type and the validation separately.
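
For reference, the inferred type is structurally identical to a hand-written interface. A sketch of what z.infer resolves to for the schema above:

// Equivalent hand-written type: what z.infer<typeof createOrderSchema> produces
type CreateOrderFormValues = {
  customerName: string;
  quantity: number;
  contactEmail: string;
  cardNumber?: string | undefined; // .optional() becomes an optional property
};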

Basic Form with React Hook Form

// components/CreateOrderForm.tsx
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { createOrderSchema, type CreateOrderFormValues } from "../schemas/order.schema";

export function CreateOrderForm() {
  const {
    register,      // connects <input> to the form
    handleSubmit,  // wraps your submit handler, prevents default, runs validation
    formState: { errors, isSubmitting, isValid, touchedFields },
    reset,         // clears the form
  } = useForm<CreateOrderFormValues>({
    resolver: zodResolver(createOrderSchema),
    defaultValues: {
      customerName: "",
      quantity: 1,
      contactEmail: "",
    },
    mode: "onBlur", // validate when user leaves a field (like touching a field away)
  });

  const onSubmit = async (data: CreateOrderFormValues) => {
    // data is fully typed and validated — TypeScript knows all fields
    await orderApi.create(data);
    reset();
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)} noValidate>
      {/* customerName field */}
      <div>
        <label htmlFor="customerName">Customer Name</label>
        <input
          id="customerName"
          {...register("customerName")}
          aria-describedby={errors.customerName ? "customerName-error" : undefined}
          aria-invalid={!!errors.customerName}
        />
        {errors.customerName && (
          <span id="customerName-error" role="alert">
            {errors.customerName.message}
          </span>
        )}
      </div>

      {/* quantity field */}
      <div>
        <label htmlFor="quantity">Quantity</label>
        <input
          id="quantity"
          type="number"
          {...register("quantity", { valueAsNumber: true })}
          aria-invalid={!!errors.quantity}
        />
        {errors.quantity && (
          <span role="alert">{errors.quantity.message}</span>
        )}
      </div>

      {/* contactEmail field */}
      <div>
        <label htmlFor="contactEmail">Email</label>
        <input
          id="contactEmail"
          type="email"
          {...register("contactEmail")}
          aria-invalid={!!errors.contactEmail}
        />
        {errors.contactEmail && (
          <span role="alert">{errors.contactEmail.message}</span>
        )}
      </div>

      <button type="submit" disabled={isSubmitting}>
        {isSubmitting ? "Placing Order..." : "Place Order"}
      </button>
    </form>
  );
}

The register("fieldName") call returns { name, ref, onChange, onBlur } — the four props React Hook Form needs to track that input. The spread {...register("customerName")} attaches all four in one line.

handleSubmit(onSubmit) does three things: prevents the default form submission, runs all validations, and only calls your onSubmit function if everything passes. Your submit handler receives typed, validated data.
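
To make the spread concrete, here is the same field wired up by hand (a sketch only; in practice you always use the spread form):

// Inside the component body: what {...register("customerName")} expands to
const customerNameField = register("customerName");

<input
  name={customerNameField.name}
  ref={customerNameField.ref}
  onChange={customerNameField.onChange}
  onBlur={customerNameField.onBlur}
/>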

Controlled Inputs vs. Uncontrolled Inputs

This is the key architectural choice in React Hook Form. Understanding it maps directly to something you know from WinForms or Blazor.

Uncontrolled inputs — React Hook Form’s default. The DOM holds the value. React Hook Form reads from the DOM when it needs to validate or submit. There is no React state update on every keystroke. This is fast and the right choice for most form fields.

// Uncontrolled — the DOM holds the value
<input {...register("customerName")} />

Controlled inputs — React state drives the value. Every keystroke triggers a re-render. Required when your UI must react to the value in real time (character counter, live preview, conditional field visibility based on typed content).

// Controlled — you manage state, React Hook Form watches it
const { control } = useForm<FormValues>();

<Controller
  name="customerName"
  control={control}
  render={({ field, fieldState }) => (
    <CustomInput
      {...field}
      error={fieldState.error?.message}
      characterCount={field.value.length}
    />
  )}
/>

Use Controller when integrating third-party components (date pickers, rich text editors, custom select components) that manage their own internal state and expose a value/onChange interface.

Approach | When to use | Re-renders
register() (uncontrolled) | Standard inputs, file inputs, most cases | Only on validation / submit
Controller (controlled) | Custom components, third-party UI libs, value-dependent UI | Every keystroke

Sharing Zod Schemas Between API and Form

This is the insight that eliminates most duplicate validation code. The same Zod schema that validates form input can also validate API request bodies on the server (with NestJS + zod-validation-pipe, or Express middleware).

// shared/schemas/order.schema.ts
// This file can live in a shared package imported by both frontend and backend
import { z } from "zod";

export const createOrderSchema = z.object({
  customerName: z.string().min(1).max(100),
  quantity: z.number().int().min(1).max(10000),
  contactEmail: z.string().email(),
});

export type CreateOrderDto = z.infer<typeof createOrderSchema>;
// Backend (NestJS) — same schema validates incoming API requests
// orders.controller.ts
import { createOrderSchema, CreateOrderDto } from "@myapp/shared/schemas";
import { ZodValidationPipe } from "nestjs-zod";

@Post()
@UsePipes(new ZodValidationPipe(createOrderSchema))
async create(@Body() dto: CreateOrderDto) {
  return this.ordersService.create(dto);
}
// Frontend — same schema drives form validation
// CreateOrderForm.tsx
import { createOrderSchema, type CreateOrderDto } from "@myapp/shared/schemas";
import { zodResolver } from "@hookform/resolvers/zod";

const { register, handleSubmit } = useForm<CreateOrderDto>({
  resolver: zodResolver(createOrderSchema),
});

This is the JS equivalent of having your DataAnnotations model shared between an ASP.NET controller and a Blazor form component. In a monorepo (Article 1.4), you publish the schema from a packages/shared directory and import it in both apps/api and apps/web.
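
A minimal sketch of that layout (the package name @myapp/shared matches the imports above; the exact workspace tooling, pnpm workspaces here, is an assumption):

repo/
  pnpm-workspace.yaml          → declares apps/* and packages/* as workspace members
  packages/
    shared/
      package.json             → "name": "@myapp/shared"
      src/schemas/
        order.schema.ts        → the Zod schema shown above
  apps/
    api/                       → NestJS; imports from "@myapp/shared/schemas"
    web/                       → Next.js; imports from "@myapp/shared/schemas"

Both apps declare the dependency as "@myapp/shared": "workspace:*", so pnpm links the local package instead of resolving it from the registry.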

File Uploads

File inputs do not work with Zod’s type inference directly — the browser File object is not a JSON-serializable type. Use register() and access the file from FileList:

const fileSchema = z.object({
  name: z.string().min(1),
  attachment: z
    .custom<FileList>()
    .refine((files) => files.length > 0, "A file is required")
    .refine(
      (files) => files[0]?.size <= 5 * 1024 * 1024,
      "File must be smaller than 5MB"
    )
    .refine(
      (files) => ["image/jpeg", "image/png", "application/pdf"].includes(files[0]?.type),
      "Only JPG, PNG, or PDF files are accepted"
    ),
});

type FileFormValues = z.infer<typeof fileSchema>;

function FileUploadForm() {
  const { register, handleSubmit, formState: { errors } } = useForm<FileFormValues>({
    resolver: zodResolver(fileSchema),
  });

  const onSubmit = async (data: FileFormValues) => {
    const formData = new FormData();
    formData.append("name", data.name);
    formData.append("attachment", data.attachment[0]);

    await fetch("/api/upload", { method: "POST", body: formData });
    // Do NOT set Content-Type header — the browser sets multipart/form-data with boundary
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input {...register("name")} />
      <input type="file" {...register("attachment")} accept=".jpg,.png,.pdf" />
      {errors.attachment && <span role="alert">{errors.attachment.message}</span>}
      <button type="submit">Upload</button>
    </form>
  );
}

Multi-Step Forms

Multi-step forms (wizards) are common and React Hook Form handles them well. The pattern: one useForm instance at the top level, each step renders a subset of fields, validation runs per-step with a partial schema:

// schemas/registration.schema.ts
import { z } from "zod";

export const stepOneSchema = z.object({
  firstName: z.string().min(1, "First name is required"),
  lastName: z.string().min(1, "Last name is required"),
  email: z.string().email("Enter a valid email"),
});

export const stepTwoSchema = z.object({
  plan: z.enum(["starter", "pro", "enterprise"], {
    errorMap: () => ({ message: "Select a plan" }),
  }),
  billingCycle: z.enum(["monthly", "annual"]),
});

// Full schema for final submission
export const registrationSchema = stepOneSchema.merge(stepTwoSchema);
export type RegistrationFormValues = z.infer<typeof registrationSchema>;
// components/RegistrationWizard.tsx
import { useState } from "react";
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import {
  registrationSchema,
  stepOneSchema,
  stepTwoSchema,
  type RegistrationFormValues,
} from "../schemas/registration.schema";

export function RegistrationWizard() {
  const [step, setStep] = useState(1);

  const form = useForm<RegistrationFormValues>({
    resolver: zodResolver(registrationSchema),
    defaultValues: {
      firstName: "",
      lastName: "",
      email: "",
      plan: "starter",
      billingCycle: "monthly",
    },
    mode: "onBlur",
  });

  const advanceToStepTwo = async () => {
    // Trigger validation only on step one's fields
    const valid = await form.trigger(["firstName", "lastName", "email"]);
    if (valid) setStep(2);
  };

  const onSubmit = async (data: RegistrationFormValues) => {
    await registrationApi.create(data);
  };

  return (
    <form onSubmit={form.handleSubmit(onSubmit)}>
      {step === 1 && (
        <>
          <input {...form.register("firstName")} placeholder="First name" />
          {form.formState.errors.firstName && (
            <span role="alert">{form.formState.errors.firstName.message}</span>
          )}
          <input {...form.register("lastName")} placeholder="Last name" />
          <input {...form.register("email")} type="email" placeholder="Email" />
          <button type="button" onClick={advanceToStepTwo}>Next</button>
        </>
      )}
      {step === 2 && (
        <>
          <select {...form.register("plan")}>
            <option value="starter">Starter</option>
            <option value="pro">Pro</option>
            <option value="enterprise">Enterprise</option>
          </select>
          <button type="button" onClick={() => setStep(1)}>Back</button>
          <button type="submit">Complete Registration</button>
        </>
      )}
    </form>
  );
}

The Vue Way: VeeValidate

VeeValidate is the Vue equivalent of React Hook Form. It integrates with Zod through the @vee-validate/zod adapter, so the same schema can back both a React form and a Vue form.

npm install vee-validate @vee-validate/zod zod
// components/CreateOrderForm.vue
<script setup lang="ts">
import { useForm, useField } from "vee-validate";
import { toTypedSchema } from "@vee-validate/zod";
import { createOrderSchema, type CreateOrderFormValues } from "../schemas/order.schema";

const { handleSubmit, errors, isSubmitting, meta } = useForm<CreateOrderFormValues>({
  validationSchema: toTypedSchema(createOrderSchema),
  initialValues: {
    customerName: "",
    quantity: 1,
    contactEmail: "",
  },
});

// useField binds a single field — returns value, error messages, and blur handler
const { value: customerName, errorMessage: customerNameError } = useField<string>("customerName");
const { value: quantity, errorMessage: quantityError } = useField<number>("quantity");
const { value: contactEmail, errorMessage: contactEmailError } = useField<string>("contactEmail");

const onSubmit = handleSubmit(async (values) => {
  await orderApi.create(values);
});
</script>

<template>
  <form @submit="onSubmit" novalidate>
    <div>
      <label for="customerName">Customer Name</label>
      <input
        id="customerName"
        v-model="customerName"
        :aria-invalid="!!customerNameError"
        :aria-describedby="customerNameError ? 'customerName-error' : undefined"
      />
      <span v-if="customerNameError" id="customerName-error" role="alert">
        {{ customerNameError }}
      </span>
    </div>

    <div>
      <label for="quantity">Quantity</label>
      <input id="quantity" v-model.number="quantity" type="number" />
      <span v-if="quantityError" role="alert">{{ quantityError }}</span>
    </div>

    <div>
      <label for="contactEmail">Email</label>
      <input id="contactEmail" v-model="contactEmail" type="email" />
      <span v-if="contactEmailError" role="alert">{{ contactEmailError }}</span>
    </div>

    <button type="submit" :disabled="isSubmitting">
      {{ isSubmitting ? "Placing Order..." : "Place Order" }}
    </button>
  </form>
</template>

The key difference from React Hook Form: VeeValidate uses v-model binding (Vue’s two-way data binding), while React Hook Form uses register() which attaches refs. Both connect to the same Zod schema through their respective adapters.

Key Differences

Concept | ASP.NET DataAnnotations | React Hook Form + Zod | VeeValidate + Zod
Validation rules | Attributes on C# class | z.object() schema | z.object() schema (same)
Type generation | The model class is the type | z.infer<typeof schema> | z.infer<typeof schema> (same)
Form binding | asp-for tag helper | register("field") | v-model="fieldValue"
Error display | asp-validation-for | {errors.field?.message} | {{ fieldError }}
Submit handling | Controller action | handleSubmit(fn) | handleSubmit(fn)
ModelState | Built-in | formState.errors | errors from useForm
Shared schema | N/A — .csproj references | Shared package / monorepo | Same shared package

Gotchas for .NET Engineers

Gotcha 1: HTML inputs return strings — number fields need explicit coercion

In ASP.NET model binding, the framework converts the "42" string from the form POST to the int property automatically. In React Hook Form, <input type="number"> still returns a string from the DOM. Zod will reject "42" when the schema expects number.

// WRONG — Zod receives "42" (string) from the input, fails validation
<input type="number" {...register("quantity")} />

// CORRECT — valueAsNumber tells React Hook Form to coerce the DOM string
<input type="number" {...register("quantity", { valueAsNumber: true })} />

// ALSO CORRECT — use z.coerce.number() in the schema instead
const schema = z.object({
  quantity: z.coerce.number().int().min(1),
  // z.coerce.number() calls Number() on the value before validating
});

The valueAsNumber option in register() and z.coerce.number() in Zod are both valid approaches. Pick one per project. We use valueAsNumber on the field so the schema stays honest about its expected types.

Gotcha 2: Validation mode defaults to “onSubmit” — users see no errors until they click submit

The default mode in React Hook Form is "onSubmit". Users fill in an invalid email, tab away, and see nothing wrong until they try to submit the form. This feels broken compared to the instant inline validation your .NET forms may have provided via jQuery Unobtrusive Validation.

// Default behavior — errors appear only after first submit attempt
const form = useForm({ resolver: zodResolver(schema) });

// Better for most forms — validate when the user leaves a field
const form = useForm({
  resolver: zodResolver(schema),
  mode: "onBlur",
});

// Best for forms where correctness matters more than noise
// "onTouched" = validate onBlur, then revalidate onChange after first blur
const form = useForm({
  resolver: zodResolver(schema),
  mode: "onTouched",
});

mode | When validation fires
"onSubmit" | Only when user submits (default)
"onBlur" | When user leaves a field (tab away)
"onChange" | On every keystroke — can be noisy
"onTouched" | onBlur first, then onChange for touched fields
"all" | Both onChange and onBlur

Gotcha 3: The submit handler only fires if all fields are valid — silent prevention

In ASP.NET MVC, if ModelState.IsValid is false, you re-render the view. The developer code always runs. In React Hook Form, handleSubmit(yourFn) only calls yourFn if validation passes. If the form is invalid, it populates formState.errors and does nothing else. No error is thrown, no rejection occurs — validation failure is silent from the perspective of your handler.

// This function will NEVER be called if form is invalid
const onSubmit = async (data: FormValues) => {
  console.log("This only runs if Zod says data is valid");
  await api.create(data);
};

// If you need to know what happened when submit failed validation, listen to the second argument
<form onSubmit={form.handleSubmit(onSubmit, (errors) => {
  // errors is the formState.errors object — all current validation failures
  console.log("Form validation failed:", errors);
  analytics.track("form_validation_failed", { fields: Object.keys(errors) });
})}>

Gotcha 4: Unregistered fields are not included in the submitted data

In ASP.NET model binding, any property on the model is populated from the POST data regardless of whether there is a form field for it. In React Hook Form, only fields registered with register() or Controller appear in the data object passed to your submit handler.

// If your Zod schema has a "userId" field but you don't register it in the form,
// userId will be undefined in onSubmit's data, and Zod will either fail validation
// or return undefined depending on whether the field is optional.

// Solution: use setValue() to set programmatic values that aren't form inputs
const { register, handleSubmit, setValue } = useForm<FormValues>();

useEffect(() => {
  // Set hidden/computed fields before submission
  setValue("userId", currentUser.id);
  setValue("createdAt", new Date().toISOString());
}, [currentUser.id]);

Gotcha 5: noValidate on the form element is required to suppress browser-native validation

Browsers have built-in validation for type="email", required, min, max. Without noValidate, the browser intercepts the submit event before React Hook Form does, shows its own ugly validation tooltips, and your custom error messages never appear.

// WRONG — browser validation fights with React Hook Form
<form onSubmit={handleSubmit(onSubmit)}>

// CORRECT — browser validation disabled, React Hook Form takes full control
<form onSubmit={handleSubmit(onSubmit)} noValidate>

Hands-On Exercise

Build a two-step job application form backed by a shared Zod schema.

Step 1 — Define the schema in schemas/job-application.schema.ts:

import { z } from "zod";

export const personalInfoSchema = z.object({
  firstName: z.string().min(1, "First name is required").max(50),
  lastName: z.string().min(1, "Last name is required").max(50),
  email: z.string().email("Enter a valid email address"),
  phone: z
    .string()
    .regex(/^\+?[\d\s\-()]{10,}$/, "Enter a valid phone number")
    .optional()
    .or(z.literal("")),
});

export const experienceSchema = z.object({
  yearsOfExperience: z.coerce
    .number()
    .int()
    .min(0, "Cannot be negative")
    .max(50, "That seems like a lot"),
  currentTitle: z.string().min(1, "Current title is required"),
  coverLetter: z
    .string()
    .min(100, "Cover letter must be at least 100 characters")
    .max(2000, "Cover letter must be 2,000 characters or fewer"),
  resume: z
    .custom<FileList>()
    .refine((f) => f.length > 0, "Resume is required")
    .refine((f) => f[0]?.size <= 2 * 1024 * 1024, "Resume must be under 2MB")
    .refine(
      (f) => f[0]?.type === "application/pdf",
      "Resume must be a PDF"
    ),
});

export const jobApplicationSchema = personalInfoSchema.merge(experienceSchema);
export type JobApplicationValues = z.infer<typeof jobApplicationSchema>;

Step 2 — Build the two-step form in React. The form should:

  • Validate step one fields (firstName, lastName, email, phone) before advancing with form.trigger(["firstName", "lastName", "email", "phone"])
  • Show a character counter on the coverLetter field using a controlled Controller input (a sketch follows this list)
  • Display inline errors below each field as soon as the user has touched and left that field (mode: "onTouched")
  • Show a loading spinner on the submit button during submission (isSubmitting)
  • Use proper aria-invalid and aria-describedby attributes for accessibility
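
A minimal sketch of the character-counter field for step two (assumes the exercise's useForm instance is named form; Controller comes from react-hook-form; styling omitted):

// Inside the step-two JSX: Controller exposes the live value for the counter
<Controller
  name="coverLetter"
  control={form.control}
  render={({ field, fieldState }) => (
    <div>
      <label htmlFor="coverLetter">Cover letter</label>
      <textarea id="coverLetter" {...field} aria-invalid={!!fieldState.error} />
      <span>{field.value?.length ?? 0} / 2000 characters</span>
      {fieldState.error && <span role="alert">{fieldState.error.message}</span>}
    </div>
  )}
/>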

Step 3 — Add a refinement to the schema to check that the email domain is not a temporary email service:

.refine(
  (data) => !["mailinator.com", "guerrillamail.com"].includes(data.email.split("@")[1]),
  { message: "Use a permanent email address", path: ["email"] }
)

Step 4 — Simulate the server validation scenario. After the form submits, use React Hook Form’s setError to surface a server-side error:

const onSubmit = async (data: JobApplicationValues) => {
  try {
    await jobApi.apply(data);
  } catch (err) {
    if (err instanceof ApiError && err.code === "EMAIL_TAKEN") {
      form.setError("email", { message: "This email has already applied" });
    }
  }
};

Quick Reference

Task | React Hook Form | VeeValidate
Initialize form | useForm({ resolver: zodResolver(schema) }) | useForm({ validationSchema: toTypedSchema(schema) })
Register input | {...register("fieldName")} | v-model="fieldValue" from useField
Access error | errors.fieldName?.message | errorMessage from useField
Wrap submit | handleSubmit(fn) | handleSubmit(fn)
Set programmatic value | setValue("field", value) | setFieldValue("field", value)
Trigger validation | trigger("field") or trigger(["f1", "f2"]) | validate()
Watch a field value | watch("fieldName") | Reactive via v-model
Reset form | reset() or reset(defaultValues) | resetForm()
Surface server error | setError("field", { message: "..." }) | setFieldError("field", "...")
Controlled component | <Controller name="..." control={control} render={...} /> | <Field name="..." v-slot="{ field }" />
Submission state | formState.isSubmitting | isSubmitting from useForm
Form validity | formState.isValid | meta.valid
File input | register("file") (valueAsNumber not needed) | useField("file")

DataAnnotations to Zod cheat sheet

DataAnnotation | Zod equivalent
[Required] | .min(1) (string) or .nonempty() (array)
[MaxLength(n)] | .max(n)
[MinLength(n)] | .min(n)
[Range(min, max)] | .min(min).max(max)
[EmailAddress] | .email()
[Url] | .url()
[RegularExpression(pattern)] | .regex(/pattern/)
[Compare("OtherField")] | .superRefine() or .refine() on the object
[CreditCard] | .regex(/^\d{16}$/) (simplified)
[Phone] | .regex(/^\+?[\d\s\-()+]+$/)
[EnumDataType(typeof MyEnum)] | z.nativeEnum(MyEnum) or z.enum(["a","b"])
IValidatableObject.Validate() | .superRefine((data, ctx) => {...})
[CustomValidation] | .refine((val) => ..., "message")

Further Reading

Client-Side Routing and Navigation

For .NET engineers who know: ASP.NET MVC routing (MapControllerRoute, attribute routing), [Authorize], IActionFilter, Razor layouts (_Layout.cshtml)
You’ll learn: How SPAs intercept URL changes to swap components without a server round-trip, and how Next.js and Nuxt structure the file system to define routes
Time: 10-15 minutes

The .NET Way (What You Already Know)

In ASP.NET MVC, every URL is a server request. The browser sends an HTTP GET to /orders/42, the server matches it against its route table, executes the controller action, renders a Razor view, and returns a full HTML page. The browser then does a full page replacement: the old DOM is discarded and the new one is painted from scratch.

// Startup.cs — route table
app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");

// or attribute routing on the controller
[Route("orders")]
public class OrdersController : Controller
{
    [HttpGet("{id:int}")]
    public IActionResult Detail(int id)
    {
        var order = _orderService.Get(id);
        return View(order); // renders Orders/Detail.cshtml
    }
}

Route guards are middleware or action filters:

// [Authorize] attribute — redirects to /Account/Login if not authenticated
[Authorize]
[HttpGet("{id:int}")]
public IActionResult Detail(int id) { ... }

// Or globally via middleware
app.UseAuthorization();

Layouts are Razor layout files:

<!-- Views/Shared/_Layout.cshtml -->
<!DOCTYPE html>
<html>
<head><title>@ViewData["Title"]</title></head>
<body>
    <nav><!-- shared nav --></nav>
    @RenderBody()   <!-- page content goes here -->
    @RenderSection("Scripts", required: false)
</body>
</html>

This model is simple to reason about: every page is independent, the server owns the routing logic, and there is no shared JavaScript state between pages (unless you deliberately add it). The cost is latency — every navigation is a network round-trip even if 90% of the page is identical to what the user is already looking at.

The SPA Way

How Client-Side Routing Works

In a Single-Page Application, the browser loads one HTML file and one JavaScript bundle once. After that, all navigation is handled in JavaScript — no server requests for page changes.

The mechanism is the browser’s History API:

// The History API — what routing libraries use under the hood
window.history.pushState({ orderId: 42 }, "", "/orders/42");
// This changes the URL bar to /orders/42 WITHOUT triggering an HTTP request
// The browser does not reload the page

window.history.replaceState(null, "", "/orders/42");
// Same, but replaces the current history entry instead of pushing a new one
// (back button won't go back to the previous URL)

window.addEventListener("popstate", (event) => {
  // Fires when the user hits the back or forward button
  // event.state contains what you passed to pushState
  renderPageForCurrentUrl();
});

A routing library sits on top of this API (a minimal sketch follows this list):

  1. It intercepts <a> clicks and calls pushState instead of following the link normally
  2. It listens to popstate events for back/forward navigation
  3. It reads window.location.pathname and renders the matching component
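
A stripped-down sketch of those three responsibilities wired together (illustrative only; React Router, Next.js, and Nuxt do this, plus much more, for you):

// mini-router.ts: an illustrative client-side router, not production code
type Render = (path: string) => void;

export function createMiniRouter(render: Render) {
  // 1. Intercept same-origin <a> clicks and pushState instead of reloading
  document.addEventListener("click", (event) => {
    const target = event.target as HTMLElement | null;
    const anchor = target?.closest("a");
    if (!anchor || anchor.origin !== window.location.origin) return;
    event.preventDefault();
    window.history.pushState(null, "", anchor.pathname);
    render(anchor.pathname);
  });

  // 2. Handle the back and forward buttons
  window.addEventListener("popstate", () => render(window.location.pathname));

  // 3. Render whatever matches the current URL on startup
  render(window.location.pathname);
}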

The server still needs to cooperate for one scenario: if the user pastes /orders/42 into the address bar, the browser sends a real HTTP request to that path. The server must respond with the same index.html for all routes — otherwise the user gets a 404. This is why most SPA deployment configurations include a catch-all rule:

# nginx.conf — catch-all for SPAs
location / {
    try_files $uri $uri/ /index.html;
}

Next.js and Nuxt handle this automatically because they control the server.

Next.js: File-System Routing with the App Router

Next.js maps the file system directly to routes. You do not write a route table. You create files in specific directories and Next.js generates the routes from the directory structure.

app/
  page.tsx              → /
  layout.tsx            → root layout (wraps all pages)
  orders/
    page.tsx            → /orders
    layout.tsx          → layout for /orders and all children
    [id]/
      page.tsx          → /orders/42, /orders/99, etc.
      edit/
        page.tsx        → /orders/42/edit
  (auth)/               → route group — does NOT add a URL segment
    login/
      page.tsx          → /login
    register/
      page.tsx          → /register
  [...slug]/
    page.tsx            → catch-all: /anything/nested/deep

The parentheses in (auth) create a route group — a folder for organizing files that does not create a URL segment. It is the Next.js way to have a different layout for a section of the app (like unauthenticated pages) without changing the URL structure.
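
For example, the (auth) group can carry its own layout for the login and register pages. A sketch (the file path follows the tree above; the markup is illustrative):

// app/(auth)/layout.tsx: wraps /login and /register; "(auth)" never appears in the URL
import type { ReactNode } from "react";

export default function AuthLayout({ children }: { children: ReactNode }) {
  // The root layout (html/body/NavBar) still wraps this; here we add a centered card shell
  return (
    <div className="min-h-screen flex items-center justify-center bg-gray-50">
      <div className="w-full max-w-md rounded-lg border border-gray-200 bg-white p-8">
        {children}
      </div>
    </div>
  );
}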

// app/orders/[id]/page.tsx — dynamic route
// The segment name in brackets becomes a prop named "params"
import { notFound } from "next/navigation";

interface PageProps {
  params: { id: string }; // always string — URL params are strings
  searchParams: { [key: string]: string | string[] | undefined };
}

export default async function OrderDetailPage({ params, searchParams }: PageProps) {
  // In the App Router, page.tsx is a Server Component by default
  // You can fetch data directly here — no useEffect, no loading state
  const order = await orderService.getById(Number(params.id));

  if (!order) {
    notFound(); // renders the nearest not-found.tsx
  }

  return (
    <main>
      <h1>Order #{order.id}</h1>
      <p>Customer: {order.customerName}</p>
    </main>
  );
}

// Generate static paths for SSG (Article 3.12)
export async function generateStaticParams() {
  const orders = await orderService.getAll();
  return orders.map((order) => ({ id: String(order.id) }));
}
// app/layout.tsx — root layout, equivalent to _Layout.cshtml
import type { ReactNode } from "react";

interface LayoutProps {
  children: ReactNode;
}

export default function RootLayout({ children }: LayoutProps) {
  return (
    <html lang="en">
      <body>
        <nav>
          {/* Navigation renders once and persists across all page navigations */}
          <NavBar />
        </nav>
        <main>{children}</main>
      </body>
    </html>
  );
}

Layouts are persistent — unlike Razor’s _Layout.cshtml which re-renders the entire layout HTML on every page request, Next.js layouts are mounted once and stay in the DOM as you navigate between child routes. React maintains their state. This is why navigation feels instant after the initial load.

Nuxt: File-System Routing for Vue

Nuxt uses the same file-system convention, inside a pages/ directory:

pages/
  index.vue             → /
  orders/
    index.vue           → /orders
    [id].vue            → /orders/42
    [id]/
      edit.vue          → /orders/42/edit
  [...slug].vue         → catch-all
layouts/
  default.vue           → applied to all pages
  auth.vue              → alternative layout for auth pages
<!-- pages/orders/[id].vue -->
<script setup lang="ts">
const route = useRoute();
const id = Number(route.params.id);

// useAsyncData is Nuxt's data fetching primitive for SSR
const { data: order, error } = await useAsyncData(
  `order-${id}`,
  () => $fetch(`/api/orders/${id}`)
);
</script>

<template>
  <div v-if="order">
    <h1>Order #{{ order.id }}</h1>
    <p>Customer: {{ order.customerName }}</p>
  </div>
  <div v-else-if="error">Failed to load order.</div>
</template>

Dynamic Routes and Catch-All Routes

Pattern | Next.js file | Nuxt file | Matches
Static | app/about/page.tsx | pages/about.vue | /about
Dynamic segment | app/orders/[id]/page.tsx | pages/orders/[id].vue | /orders/42
Optional dynamic | Not supported directly (use an optional catch-all [[...id]]) | pages/orders/[[id]].vue | /orders and /orders/42
Catch-all | app/docs/[...slug]/page.tsx | pages/docs/[...slug].vue | /docs/a/b/c
Optional catch-all | app/docs/[[...slug]]/page.tsx | pages/docs/[[...slug]].vue | /docs and /docs/a/b/c

Route Parameters and Query Strings

// Next.js — reading route params and query strings in a page component
export default function OrderPage({
  params,
  searchParams,
}: {
  params: { id: string };
  searchParams: { tab?: string; page?: string };
}) {
  const orderId = Number(params.id);              // /orders/42 → 42
  const activeTab = searchParams.tab ?? "details"; // ?tab=history → "history"
  const page = Number(searchParams.page ?? "1");   // ?page=3 → 3

  return <OrderDetail id={orderId} tab={activeTab} page={page} />;
}
// In a Client Component — use the useSearchParams hook
"use client";
import { useSearchParams, useParams } from "next/navigation";

function OrderTabs() {
  const params = useParams<{ id: string }>();
  const searchParams = useSearchParams();

  const tab = searchParams.get("tab") ?? "details";
  const orderId = Number(params.id);

  return <TabBar active={tab} orderId={orderId} />;
}

Link Components and Programmatic Navigation

In HTML, <a href="/orders"> causes a full page reload. In Next.js and Nuxt, you use framework-provided components that intercept the click and use the History API instead:

// Next.js — Link component (preferred for navigation users can see)
import Link from "next/link";

function OrderList({ orders }: { orders: Order[] }) {
  return (
    <ul>
      {orders.map((order) => (
        <li key={order.id}>
          {/* Next.js prefetches this route when the link is visible in the viewport */}
          <Link href={`/orders/${order.id}`}>Order #{order.id}</Link>
        </li>
      ))}
    </ul>
  );
}
// Next.js — useRouter for programmatic navigation (after form submission, etc.)
"use client";
import { useRouter } from "next/navigation";

function CreateOrderForm() {
  const router = useRouter();

  const onSubmit = async (data: CreateOrderFormValues) => {
    const order = await orderApi.create(data);

    // Programmatic navigation — equivalent to Response.Redirect() in C#
    router.push(`/orders/${order.id}`);

    // router.replace() — does not add a new history entry (the back button skips it)
    router.replace("/dashboard");

    // router.back() — equivalent to history.go(-1)
    router.back();

    // router.refresh() — re-fetches server data for current route without full reload
    router.refresh();
  };
}
<!-- Nuxt — NuxtLink and useRouter -->
<template>
  <ul>
    <li v-for="order in orders" :key="order.id">
      <NuxtLink :to="`/orders/${order.id}`">Order #{{ order.id }}</NuxtLink>
    </li>
  </ul>
</template>

<script setup lang="ts">
const router = useRouter();

async function createAndNavigate(data: CreateOrderFormValues) {
  const order = await orderApi.create(data);
  await router.push(`/orders/${order.id}`);
}
</script>

Route Guards: Middleware (the [Authorize] equivalent)

In ASP.NET, [Authorize] is an attribute on controllers or actions. In Next.js, route protection is handled by middleware — a file at the root of your project that runs before every request:

// middleware.ts — runs on every matching request, before the page renders
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  const token = request.cookies.get("auth-token")?.value;
  const isAuthRoute = request.nextUrl.pathname.startsWith("/auth");
  const isApiRoute = request.nextUrl.pathname.startsWith("/api");

  // If no token and trying to access a protected route
  if (!token && !isAuthRoute && !isApiRoute) {
    const loginUrl = new URL("/auth/login", request.url);
    loginUrl.searchParams.set("returnUrl", request.nextUrl.pathname);
    return NextResponse.redirect(loginUrl);
    // Equivalent to [Authorize]'s redirect to /Account/Login?ReturnUrl=...
  }

  // If has token and trying to access auth pages, redirect to dashboard
  if (token && isAuthRoute) {
    return NextResponse.redirect(new URL("/dashboard", request.url));
  }

  return NextResponse.next(); // allow the request through
}

// Configure which paths the middleware runs on
export const config = {
  matcher: [
    // Exclude static files, images, Next.js internals
    "/((?!_next/static|_next/image|favicon.ico).*)",
  ],
};

For more granular per-page authorization (checking specific roles, not just “is logged in”), handle it in the page component itself:

// app/admin/page.tsx — page-level authorization check
import { redirect } from "next/navigation";
import { getCurrentUser } from "@/lib/auth";

export default async function AdminPage() {
  const user = await getCurrentUser();

  if (!user) {
    redirect("/auth/login");
  }

  if (!user.roles.includes("admin")) {
    redirect("/unauthorized");
  }

  return <AdminDashboard user={user} />;
}

In Nuxt, route middleware lives in the middleware/ directory:

// middleware/auth.ts
export default defineNuxtRouteMiddleware((to, from) => {
  const authStore = useAuthStore();

  if (!authStore.isAuthenticated) {
    return navigateTo(`/login?returnUrl=${to.fullPath}`);
  }
});
<!-- Apply middleware to a specific page -->
<script setup>
definePageMeta({
  middleware: ["auth"],  // runs the auth middleware before this page renders
});
</script>

Nested Routes and Layouts

In Next.js, every layout.tsx in a directory wraps all routes nested inside it. This creates nested layouts that can share state:

app/
  layout.tsx           → outer layout (html, body, NavBar)
  orders/
    layout.tsx         → inner layout for /orders/* (sidebar, breadcrumbs)
    page.tsx           → /orders — rendered inside both layouts
    [id]/
      page.tsx         → /orders/42 — also rendered inside both layouts
// app/orders/layout.tsx — layout for the /orders section
export default function OrdersLayout({ children }: { children: React.ReactNode }) {
  return (
    <div className="orders-section">
      <aside>
        <OrdersSidebar />  {/* persists as you navigate between /orders pages */}
      </aside>
      <section>{children}</section>
    </div>
  );
}

This nesting is more powerful than Razor’s _Layout.cshtml inheritance: each layout mounts once and maintains React state. An accordion open in OrdersSidebar stays open as the user navigates from /orders to /orders/42.
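
A sketch of why that works: the sidebar is a Client Component that owns its own state, and the layout that renders it stays mounted across navigations (component and file names are illustrative):

// components/OrdersSidebar.tsx: rendered once by app/orders/layout.tsx
"use client";

import { useState } from "react";

export function OrdersSidebar() {
  // This state survives navigation between /orders and /orders/42,
  // because the layout (and this component) never unmounts
  const [openSection, setOpenSection] = useState<"filters" | "saved" | null>("filters");

  const toggle = (section: "filters" | "saved") =>
    setOpenSection((current) => (current === section ? null : section));

  return (
    <nav>
      <button onClick={() => toggle("filters")}>Filters</button>
      {openSection === "filters" && <div>{/* filter controls */}</div>}

      <button onClick={() => toggle("saved")}>Saved views</button>
      {openSection === "saved" && <div>{/* saved view list */}</div>}
    </nav>
  );
}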

Key Differences

Concept | ASP.NET MVC | Next.js (App Router) | Nuxt
Route definition | MapControllerRoute or attributes | File system (app/**/*.tsx) | File system (pages/**/*.vue)
Dynamic segment | {id} in route template | [id] folder/file name | [id].vue file name
Catch-all | {*path} | [...slug] | [...slug].vue
Layout | _Layout.cshtml | layout.tsx (persists in DOM) | layouts/default.vue
Nested layout | _ViewStart + layout inheritance | Nested layout.tsx files | Nested layouts via <NuxtLayout>
Route guard | [Authorize] attribute | middleware.ts or page-level redirect | middleware/ directory
Programmatic nav | return RedirectToAction(...) | router.push(url) | navigateTo(url)
Link component | <a asp-action="..."> | <Link href="..."> | <NuxtLink :to="...">
Route params | int id method parameter | params.id prop (string) | route.params.id (string)
Query string | string? tab method parameter | searchParams.tab prop | route.query.tab
Full page reload | Always | Never (client-side navigation) | Never (client-side navigation)

Gotchas for .NET Engineers

Gotcha 1: Route parameters are always strings

In ASP.NET, route parameters are automatically converted to the method parameter’s declared type. int id in a controller action receives 42 as an integer. In Next.js and Nuxt, params.id is always a string. You must convert it yourself.

// WRONG — comparison will always fail (42 !== "42")
const order = orders.find((o) => o.id === params.id);

// CORRECT — convert to the expected type
const orderId = Number(params.id);
// Or, to guard against NaN:
const orderId = parseInt(params.id, 10);
if (isNaN(orderId)) notFound();

const order = orders.find((o) => o.id === orderId);

Gotcha 2: SEO and the “window is not defined” problem

Client-side routing means the server initially sends a nearly empty HTML file, and the browser renders everything with JavaScript. Search engine crawlers may not execute JavaScript, so they see little content. This is why pure SPAs (like Create React App) rank poorly for content pages.

Next.js and Nuxt solve this with Server-Side Rendering (covered in Article 3.12). But even in SSR mode, code that references browser globals (window, document, localStorage) will crash when it runs on the server — because those globals do not exist in Node.js.

// WRONG — crashes on the server
function getStoredTheme() {
  return localStorage.getItem("theme"); // ReferenceError: localStorage is not defined
}

// CORRECT — guard with typeof check
function getStoredTheme() {
  if (typeof window === "undefined") return "light"; // server-side fallback
  return localStorage.getItem("theme") ?? "light";
}

// CORRECT — use useEffect which only runs in the browser
"use client";
import { useState, useEffect } from "react";

function ThemeToggle() {
  const [theme, setTheme] = useState("light");

  useEffect(() => {
    // This only runs in the browser, after hydration
    setTheme(localStorage.getItem("theme") ?? "light");
  }, []);
}

Gotcha 3: <Link> prefetches routes by default

Next.js’s <Link> component automatically prefetches the linked route’s JavaScript when the link enters the viewport. This is a performance optimization — navigation feels instant because the code is already downloaded. But on pages with hundreds of links, this can cause significant bandwidth usage and CPU time on slow devices.

// Disable prefetching for links that are rarely clicked
<Link href="/admin/audit-log" prefetch={false}>
  Audit Log
</Link>

// The prefetch behavior in the App Router:
// - In production: prefetches when link is in the viewport
// - In development: no prefetching (to avoid slowdowns while iterating)

Gotcha 4: Client Components and Server Components are not interchangeable

In Next.js’s App Router, components are Server Components by default. Server Components run on the server only and cannot use React hooks or browser APIs. To opt into client-side rendering, add "use client" at the top of the file.

// app/orders/page.tsx — Server Component (no "use client")
// Can use async/await, fetch(), server-only imports
// Cannot use useState, useEffect, onClick handlers, or browser APIs

export default async function OrdersPage() {
  const orders = await orderService.getAll(); // direct DB call OK here
  return <OrderList orders={orders} />;
}
// components/OrderList.tsx — Client Component
"use client";
// Can use useState, useEffect, onClick, useRouter, etc.
// Cannot use server-only imports (db clients, fs, etc.)

import { useState } from "react";

export function OrderList({ orders }: { orders: Order[] }) {
  const [selected, setSelected] = useState<number | null>(null);

  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id} onClick={() => setSelected(o.id)}>
          {o.id === selected ? "[selected] " : ""}{o.customerName}
        </li>
      ))}
    </ul>
  );
}

The "use client" directive is not “turn off server rendering for this page.” It is “this component requires access to browser APIs or React state.” The component still gets server-rendered HTML for SEO and then hydrated on the client. Think of it more like a capabilities declaration.

Hands-On Exercise

Start with a fresh Next.js project (npx create-next-app@latest --typescript) and implement the following routing structure:

Route structure to build:

/                        → home page with links to Orders and Users
/orders                  → list of mock orders
/orders/[id]             → detail page for a single order
/orders/[id]/edit        → edit form (protected — redirect to /login if not logged in)
/users                   → list of users (uses the same layout as /orders)
/login                   → login page

Tasks:

  1. In the root app/layout.tsx, render a persistent <NavBar> with links to /orders and /users. Verify that the NavBar does not re-mount when navigating between those sections (add a console.log("NavBar mounted") in a useEffect).

  2. Create app/orders/[id]/page.tsx. Access params.id, convert it to a number, and render it. Add a notFound() call if id is NaN.

  3. Create a route guard: add middleware.ts that checks for a logged-in cookie. If the cookie is missing and the requested path is /orders/[id]/edit, redirect to /login?returnUrl=/orders/[id]/edit.

  4. On the /login page, set the cookie via a form submit (just a client-side document.cookie for this exercise) and then redirect to the returnUrl query parameter value (a sketch follows this list).

  5. Verify the window is not defined problem: try accessing localStorage directly in a Server Component, observe the error, then move the code into a "use client" component with a useEffect.
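
A sketch for task 4 (exercise-only cookie handling in a Client Component; the cookie name and form markup are illustrative, and a real app would set an httpOnly cookie from the server):

// app/login/page.tsx: reads returnUrl, sets the exercise cookie, redirects
"use client";

import type { FormEvent } from "react";
import { useRouter, useSearchParams } from "next/navigation";

export default function LoginPage() {
  const router = useRouter();
  const searchParams = useSearchParams();
  const returnUrl = searchParams.get("returnUrl") ?? "/";

  const handleSubmit = (event: FormEvent<HTMLFormElement>) => {
    event.preventDefault();
    // Exercise only: use whatever cookie name your middleware.ts checks for
    document.cookie = "logged-in=true; path=/";
    router.push(returnUrl);
  };

  return (
    <form onSubmit={handleSubmit}>
      <button type="submit">Log in</button>
    </form>
  );
}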

Quick Reference

Task | Next.js | Nuxt
Navigate with link | <Link href="/path">text</Link> | <NuxtLink to="/path">text</NuxtLink>
Navigate programmatically | router.push("/path") | navigateTo("/path")
Replace current history entry | router.replace("/path") | navigateTo("/path", { replace: true })
Go back | router.back() | router.back()
Read route param | params.id (Server Component) / useParams() (Client) | useRoute().params.id
Read query string | searchParams.tab (Server) / useSearchParams() (Client) | useRoute().query.tab
Redirect in server code | redirect("/path") from next/navigation | navigateTo("/path") in middleware
Route middleware / guard | middleware.ts at project root | middleware/name.ts
Apply middleware to page | Configured in matcher or globally | definePageMeta({ middleware: ["name"] })
Layout for a section | layout.tsx in the directory | layouts/name.vue + definePageMeta
Catch-all route | [...slug] folder | [...slug].vue
404 page | not-found.tsx | error.vue
Generate static paths | generateStaticParams() export | nitro.routeRules or useAsyncData
Opt into client rendering | "use client" at top of file | All .vue components are client-capable
Prefetch link | Default in <Link> | Default in <NuxtLink>
Disable prefetch | <Link prefetch={false}> | <NuxtLink no-prefetch>

Further Reading

Data Fetching Patterns: HttpClient vs. fetch / axios / TanStack Query

For .NET engineers who know: HttpClient, IHttpClientFactory, DelegatingHandler, System.Text.Json, typed HTTP clients, and Polly for resilience
You’ll learn: How the JS/TS data fetching stack layers from raw fetch up through TanStack Query — and why TanStack Query’s caching model changes how you think about server state in the UI
Time: 15-20 minutes

The .NET Way (What You Already Know)

In .NET, HttpClient is the standard HTTP client, registered through IHttpClientFactory to manage connection pooling and handle socket exhaustion. You configure it in Program.cs, inject it by interface, and it arrives typed and ready:

// Program.cs — register a typed HttpClient
builder.Services.AddHttpClient<IOrderApiClient, OrderApiClient>(client =>
{
    client.BaseAddress = new Uri("https://api.example.com");
    client.DefaultRequestHeaders.Add("Accept", "application/json");
})
.AddHttpMessageHandler<AuthTokenHandler>()  // DelegatingHandler for auth
.AddTransientHttpErrorPolicy(policy =>      // Polly: retry on 5xx and network errors
    policy.WaitAndRetryAsync(3, _ => TimeSpan.FromSeconds(1)));

// The typed client
public class OrderApiClient : IOrderApiClient
{
    private readonly HttpClient _client;

    public OrderApiClient(HttpClient client)
    {
        _client = client;
    }

    public async Task<Order?> GetOrderAsync(int id)
    {
        var response = await _client.GetAsync($"/api/orders/{id}");
        response.EnsureSuccessStatusCode(); // throws on 4xx, 5xx
        return await response.Content.ReadFromJsonAsync<Order>();
    }

    public async Task<IReadOnlyList<Order>> GetOrdersAsync(int userId)
    {
        return await _client.GetFromJsonAsync<List<Order>>($"/api/orders?userId={userId}")
            ?? [];
    }
}

// DelegatingHandler for auth tokens — middleware pattern for HttpClient
public class AuthTokenHandler : DelegatingHandler
{
    private readonly ITokenProvider _tokenProvider;

    public AuthTokenHandler(ITokenProvider tokenProvider)
    {
        _tokenProvider = tokenProvider;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var token = await _tokenProvider.GetTokenAsync();
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        return await base.SendAsync(request, cancellationToken);
    }
}

Key behaviors your code relies on:

  • EnsureSuccessStatusCode() throws on non-2xx responses — you always know when a request failed
  • ReadFromJsonAsync<T>() deserializes with System.Text.Json — typed deserialization
  • DelegatingHandler intercepts every request in the pipeline — one place for auth, logging, retry
  • Polly handles transient failures — retry policies configured once, applied everywhere

The JavaScript/TypeScript Way

The Foundation: The Fetch API

fetch is the browser’s built-in HTTP client, available in all modern browsers and in Node.js 18+. It is the lowest-level tool in the stack — the equivalent of creating an HttpClient instance directly without IHttpClientFactory, without typed deserialization, and without automatic error throwing.

// Basic GET request
const response = await fetch("https://api.example.com/orders/42");
const order: Order = await response.json();

// That looks simple. Here is what it does NOT do that HttpClient does:

The critical difference from HttpClient: fetch does not throw on HTTP errors. A 404, a 500, a 403 — all of these resolve successfully. The response.ok property and response.status tell you what happened, but you have to check them yourself.

// WRONG — this will not throw on 404 or 500
const response = await fetch("/api/orders/999");
const order = await response.json(); // may parse an error body, not an order

// CORRECT — check response.ok before deserializing
async function fetchOrder(id: number): Promise<Order> {
  const response = await fetch(`/api/orders/${id}`);

  if (!response.ok) {
    // response.status: 404, 500, 403, etc.
    // response.statusText: "Not Found", "Internal Server Error", etc.
    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
  }

  return response.json() as Promise<Order>;
  // No automatic type validation — if the API returns wrong shape, TypeScript won't catch it at runtime
}
// POST with JSON body
async function createOrder(data: CreateOrderRequest): Promise<Order> {
  const response = await fetch("/api/orders", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(data),
  });

  if (!response.ok) {
    const errorBody = await response.json().catch(() => ({}));
    throw new ApiError(response.status, errorBody.message ?? "Request failed");
  }

  return response.json();
}

fetch is low-level and verbose for production use. You can use it directly in server-side code (Next.js server components, API routes) where you control the environment, but in client-side code you almost always layer on top of it.

Axios: A Better HttpClient Wrapper

Axios is the closest analog to HttpClient in JavaScript — it wraps the platform’s HTTP primitives (XMLHttpRequest in the browser, Node’s http module on the server) and adds the behaviors you expect from .NET:

  • Throws on non-2xx responses automatically
  • Serializes request body to JSON automatically
  • Deserializes response body from JSON automatically
  • Interceptors equivalent to DelegatingHandler
  • Configurable base URL and default headers
npm install axios
// lib/api-client.ts — configured Axios instance
import axios, { AxiosError } from "axios";

export const apiClient = axios.create({
  baseURL: process.env.NEXT_PUBLIC_API_URL ?? "https://api.example.com",
  headers: {
    "Content-Type": "application/json",
  },
  timeout: 10_000, // 10 seconds — like HttpClient.Timeout
});

// Request interceptor — equivalent to DelegatingHandler for outgoing requests
apiClient.interceptors.request.use((config) => {
  const token = authStore.getToken();
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

// Response interceptor — equivalent to DelegatingHandler for incoming responses
apiClient.interceptors.response.use(
  (response) => response, // pass through on success
  async (error: AxiosError) => {
    if (error.response?.status === 401) {
      // Token expired — try to refresh
      const refreshed = await authStore.refreshToken();
      if (refreshed && error.config) {
        // Retry the original request with the new token
        error.config.headers.Authorization = `Bearer ${authStore.getToken()}`;
        return apiClient.request(error.config);
      }
      authStore.logout();
    }
    return Promise.reject(error);
  }
);
// Typed API functions using the configured client
export const orderApi = {
  async getById(id: number): Promise<Order> {
    const { data } = await apiClient.get<Order>(`/orders/${id}`);
    return data;
    // axios throws on non-2xx — no manual response.ok check needed
    // data is typed as Order — but still no runtime type validation
  },

  async list(userId: number): Promise<Order[]> {
    const { data } = await apiClient.get<Order[]>("/orders", {
      params: { userId }, // appends as ?userId=42
    });
    return data;
  },

  async create(payload: CreateOrderRequest): Promise<Order> {
    const { data } = await apiClient.post<Order>("/orders", payload);
    return data;
  },

  async update(id: number, payload: UpdateOrderRequest): Promise<Order> {
    const { data } = await apiClient.put<Order>(`/orders/${id}`, payload);
    return data;
  },

  async delete(id: number): Promise<void> {
    await apiClient.delete(`/orders/${id}`);
  },
};

Axios is sufficient for server-to-server HTTP calls and for simple client applications. Where it falls short is managing server state in the UI — caching responses, knowing when data is stale, showing loading states, deduplicating parallel requests for the same data, and handling background refetching. That is where TanStack Query fits.

TanStack Query: Server State Management

TanStack Query (formerly React Query, with a Vue adapter called @tanstack/vue-query) is not an HTTP client. It sits above your HTTP client — you bring your own fetching function (Axios, fetch, or anything else) and TanStack Query manages what to do with the results:

  • Caches responses and returns the cached value immediately on subsequent requests
  • Deduplicates simultaneous requests for the same data (if ten components mount at once and all want /orders/42, only one HTTP request goes out)
  • Refetches in the background when the cached data is stale (configurable)
  • Manages loading, error, and success states
  • Handles optimistic updates (show the change immediately, roll back on failure)
  • Synchronizes with browser focus and network reconnection events
npm install @tanstack/react-query
# For Vue:
npm install @tanstack/vue-query

Setup

// app/providers.tsx — React setup (must wrap your application)
"use client";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { ReactQueryDevtools } from "@tanstack/react-query-devtools";
import { useState } from "react";

export function Providers({ children }: { children: React.ReactNode }) {
  const [queryClient] = useState(
    () =>
      new QueryClient({
        defaultOptions: {
          queries: {
            staleTime: 60 * 1000,       // data stays fresh for 1 minute
            gcTime: 5 * 60 * 1000,      // unused cache entries cleared after 5 minutes
            retry: 2,                    // retry failed queries twice (like Polly)
            refetchOnWindowFocus: true,  // refetch when user returns to the tab
          },
        },
      })
  );

  return (
    <QueryClientProvider client={queryClient}>
      {children}
      <ReactQueryDevtools initialIsOpen={false} />
    </QueryClientProvider>
  );
}

Queries: Reading Data

A query fetches and caches data. The queryKey is the cache key — an array that uniquely identifies this data. Think of it as the string key in IMemoryCache.GetOrCreateAsync(key, ...):

// hooks/useOrder.ts
import { useQuery } from "@tanstack/react-query";
import { orderApi } from "../lib/api-client";

export function useOrder(id: number) {
  return useQuery({
    queryKey: ["orders", id],    // cache key — ["orders", 42] is distinct from ["orders", 99]
    queryFn: () => orderApi.getById(id),
    enabled: id > 0,             // only fetch if we have a valid id
    staleTime: 30 * 1000,        // override default: this data stays fresh for 30s
  });
}

export function useOrders(userId: number) {
  return useQuery({
    queryKey: ["orders", { userId }],  // objects work — compared by deep equality
    queryFn: () => orderApi.list(userId),
  });
}
// components/OrderDetail.tsx — consuming the query
export function OrderDetail({ id }: { id: number }) {
  const { data: order, isLoading, isError, error, isFetching } = useOrder(id);
  // isLoading: true on first load (no cached data)
  // isFetching: true whenever a request is in-flight (including background refetches)
  // isError: true if the query failed
  // data: the Order, or undefined if not yet loaded

  if (isLoading) return <div>Loading order...</div>;
  if (isError) return <div>Error: {error.message}</div>;
  if (!order) return null;

  return (
    <div>
      <h1>Order #{order.id}</h1>
      {isFetching && <span>Refreshing...</span>}
      <p>{order.customerName}</p>
    </div>
  );
}

When two components both call useOrder(42) — say, a detail panel and a breadcrumb — TanStack Query deduplicates the request and shares the cached result. In .NET, you would manually implement this with IMemoryCache. Here it is automatic.
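
To see the sharing concretely, here is a small sketch that reuses the useOrder hook from above (the component names are illustrative):

// Both components mount on the same page and build the same key: ["orders", 42].
// TanStack Query sends ONE request; both read the shared cache entry.
function OrderBreadcrumb({ id }: { id: number }) {
  const { data } = useOrder(id);
  return <span>{data ? `Order #${data.id}` : "…"}</span>;
}

function OrderSummaryPanel({ id }: { id: number }) {
  const { data, isLoading } = useOrder(id);
  if (isLoading) return <p>Loading…</p>;
  return <p>{data?.customerName}</p>;
}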

Stale-While-Revalidate

The caching model is stale-while-revalidate: data is served from cache immediately (fast), and if the cached data is older than staleTime, a background request is fired to refresh it. The user never waits for a spinner on subsequent visits to the same page.

// The lifecycle of a query:
//
// 1. First call: no cache → isLoading: true → HTTP request → data cached → isLoading: false
// 2. Same component re-mounts within staleTime: cache hit → data returned immediately, no request
// 3. Same component re-mounts after staleTime: cache hit → data returned immediately
//    AND a background request is fired (isFetching: true, isLoading: false)
// 4. User focuses the tab after being away: background refetch triggered

Mutations: Writing Data

A mutation is any operation that changes server state — POST, PUT, PATCH, DELETE. Mutations pair with cache invalidation: after a successful mutation, you tell TanStack Query to throw away cached data so it refetches fresh state:

// hooks/useCreateOrder.ts
import { useMutation, useQueryClient } from "@tanstack/react-query";
import { orderApi } from "../lib/api-client";
import type { CreateOrderRequest } from "../schemas/order.schema";

export function useCreateOrder() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (data: CreateOrderRequest) => orderApi.create(data),

    onSuccess: (newOrder) => {
      // Invalidate the orders list so it refetches with the new order included
      queryClient.invalidateQueries({ queryKey: ["orders"] });
      // Similar in spirit to Post/Redirect/Get: after the write, force a fresh read of the list

      // Optionally, seed the cache for the new order's detail page
      queryClient.setQueryData(["orders", newOrder.id], newOrder);
    },

    onError: (error) => {
      console.error("Order creation failed:", error);
    },
  });
}
// components/CreateOrderForm.tsx
export function CreateOrderForm() {
  const createOrder = useCreateOrder();

  const onSubmit = async (data: CreateOrderFormValues) => {
    const newOrder = await createOrder.mutateAsync(data);
    // mutateAsync throws on error — wrap it in try/catch if you want to handle failures here
    // mutate(data) is fire-and-forget — it never throws; use the onError callback instead
    router.push(`/orders/${newOrder.id}`); // router comes from next/navigation's useRouter()
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      {/* ... form fields ... */}
      <button type="submit" disabled={createOrder.isPending}>
        {createOrder.isPending ? "Creating..." : "Create Order"}
      </button>
      {createOrder.isError && (
        <div role="alert">{createOrder.error.message}</div>
      )}
    </form>
  );
}

Optimistic Updates

Optimistic updates show the change in the UI immediately before the server confirms it — then roll back if the server returns an error. This is the pattern that makes UIs feel responsive at any network speed.

export function useDeleteOrder() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (id: number) => orderApi.delete(id),

    onMutate: async (id) => {
      // Cancel any in-flight refetches (prevents race conditions)
      await queryClient.cancelQueries({ queryKey: ["orders"] });

      // Snapshot the current cache value
      const previousOrders = queryClient.getQueryData<Order[]>(["orders"]);

      // Optimistically update the cache — remove the order immediately
      queryClient.setQueryData<Order[]>(["orders"], (old) =>
        old?.filter((o) => o.id !== id) ?? []
      );

      // Return snapshot for rollback in onError
      return { previousOrders };
    },

    onError: (_error, _id, context) => {
      // Roll back to the snapshot if the mutation fails
      if (context?.previousOrders) {
        queryClient.setQueryData(["orders"], context.previousOrders);
      }
    },

    onSettled: () => {
      // Always refetch after mutation settles (success or failure)
      queryClient.invalidateQueries({ queryKey: ["orders"] });
    },
  });
}

Dependent Queries

Sometimes one query depends on the result of another — like fetching a user’s settings only after fetching the user:

function useUserSettings(userId: number | undefined) {
  const userQuery = useQuery({
    queryKey: ["users", userId],
    queryFn: () => userApi.getById(userId!),
    enabled: userId !== undefined,
  });

  const settingsQuery = useQuery({
    queryKey: ["settings", userQuery.data?.settingsId],
    queryFn: () => settingsApi.getById(userQuery.data!.settingsId),
    enabled: userQuery.data?.settingsId !== undefined, // only runs after user is loaded
  });

  return { user: userQuery.data, settings: settingsQuery.data };
}

In .NET you would await these sequentially in a service method. TanStack Query handles the dependency declaratively — you describe what depends on what, and it manages the sequencing and caching.

Vue Integration

// Vue 3 — register the plugin in main.ts (in Nuxt, do this in a plugin file instead)
import { createApp } from "vue";
import { VueQueryPlugin, QueryClient } from "@tanstack/vue-query";
import App from "./App.vue";

const queryClient = new QueryClient({
  defaultOptions: { queries: { staleTime: 60_000 } },
});

const app = createApp(App);
app.use(VueQueryPlugin, { queryClient });
app.mount("#app");
<!-- components/OrderDetail.vue -->
<script setup lang="ts">
import { computed } from "vue";
import { useQuery } from "@tanstack/vue-query";
import { orderApi } from "@/lib/api-client";

const props = defineProps<{ id: number }>();

const { data: order, isLoading, isError, error } = useQuery({
  queryKey: computed(() => ["orders", props.id]),
  queryFn: () => orderApi.getById(props.id),
});
</script>

<template>
  <div v-if="isLoading">Loading...</div>
  <div v-else-if="isError">Error: {{ error?.message }}</div>
  <div v-else-if="order">
    <h1>Order #{{ order.id }}</h1>
    <p>{{ order.customerName }}</p>
  </div>
</template>

Key Differences

| Concept | .NET HttpClient | fetch API | Axios | TanStack Query |
|---|---|---|---|---|
| Error on non-2xx | EnsureSuccessStatusCode() (manual) | Manual if (!response.ok) | Automatic | Via your fetcher |
| JSON deserialization | ReadFromJsonAsync<T>() | await response.json() (untyped) | { data } (TypeScript typed) | Via your fetcher |
| Request interceptors | DelegatingHandler | Manual wrapper | interceptors.request.use() | Not applicable |
| Response interceptors | DelegatingHandler | Manual wrapper | interceptors.response.use() | onError callbacks |
| Retry logic | Polly | Manual | Axios retry plugin | Built-in retry option |
| Caching | IMemoryCache (separate) | None | None | Built-in, automatic |
| Loading state | Manual bool loading | Manual | Manual | isLoading, isFetching |
| Deduplication | None | None | None | Automatic |
| Background refresh | None | None | None | staleTime + refetch |
| Optimistic updates | Manual | Manual | Manual | onMutate + rollback |
| Cache invalidation | Manual | None | None | invalidateQueries() |
| DI registration | IHttpClientFactory | Module import | Module import | QueryClientProvider |

Gotchas for .NET Engineers

Gotcha 1: fetch does not throw on HTTP errors — ever

This is the single biggest footgun for engineers coming from HttpClient. In .NET, non-2xx responses throw HttpRequestException unless you catch it. With fetch, you check response.ok manually or you silently receive the error response body and try to parse it as your expected type.

// This compiles. It runs. It is wrong.
const response = await fetch("/api/orders/99999");
const order: Order = await response.json();
// If the response was 404, you just parsed { "type": "NotFound", "title": "Not Found", ... }
// as an Order. TypeScript won't warn you — the cast is just a lie at runtime.

console.log(order.customerName); // undefined — no runtime error, silent data corruption

The fix: always check response.ok, or use Axios, or write a wrapper:

async function apiFetch<T>(url: string, options?: RequestInit): Promise<T> {
  const response = await fetch(url, options);
  if (!response.ok) {
    const body = await response.json().catch(() => ({}));
    throw new ApiError(response.status, body.detail ?? response.statusText);
  }
  return response.json() as Promise<T>;
}

Gotcha 2: TypeScript generics on fetch/axios do not validate at runtime

axios.get<Order>(url) and (await fetch(url)).json() as Order are compile-time assertions only. If your API changes its response shape, TypeScript will not protect you at runtime. You get undefined on fields that no longer exist, or worse, old values on renamed fields.

// This type annotation is a promise to TypeScript, not a runtime guarantee
const { data } = await apiClient.get<Order>("/orders/42");
// If the API now returns { order_id instead of id }, data.id is undefined at runtime
// TypeScript sees it as Order and is satisfied — it trusted your generic

// Solution: parse with Zod for runtime safety
const rawData = await apiClient.get("/orders/42");
const order = orderSchema.parse(rawData.data); // throws ZodError if shape is wrong

For internal APIs where you control both sides, the TypeScript generic is usually sufficient — mismatches will be caught quickly. For third-party APIs or where the backend team is independent, add Zod parsing on the response.
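
As a sketch of the Zod-validated variant (assuming an orderSchema defined alongside your other schemas — the names here are illustrative, not part of the Axios API):

// schemas/order.schema.ts — runtime schema mirroring the Order shape
import { z } from "zod";

export const orderSchema = z.object({
  id: z.number(),
  customerName: z.string(),
  total: z.number(),
});

// lib/api-client.ts — validate at the boundary for third-party APIs
export async function getOrderValidated(id: number) {
  const response = await apiClient.get(`/orders/${id}`);
  // Throws ZodError if the shape drifted; the return type is inferred from the schema
  return orderSchema.parse(response.data);
}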

Gotcha 3: Query keys must be stable — objects and arrays have reference equality gotchas

TanStack Query compares query keys using deep equality, so ["orders", { userId: 1 }] and ["orders", { userId: 1 }] are the same key even though they are different object references. However, keys must not include unstable references like function definitions or class instances:

// FINE — a new object literal is created on every render, but TanStack Query compares keys
// by value (deep equality), so plain objects and primitives in keys are stable

// WRONG — including a function in the key is not meaningful and not comparable
queryKey: ["orders", filterFn]; // filterFn is a reference — do not do this

// CORRECT — use the function's outputs in the key, not the function itself
queryKey: ["orders", { status: "active", page: 1 }];

// WRONG — including a Date object without serializing it
queryKey: ["orders", new Date()]; // creates a new Date every render — causes refetch storm

// CORRECT — serialize the date
queryKey: ["orders", dateFilter.toISOString()];

A good convention: every key segment should be a string, number, boolean, or a plain object/array of those. No class instances, no functions.
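
One common way to enforce that convention is a small “query key factory” so every hook builds keys the same way — a sketch (orderKeys is our own helper, not a TanStack Query API):

// lib/query-keys.ts — one place that knows how order keys are shaped
export const orderKeys = {
  all: ["orders"] as const,
  list: (filters: { userId?: number; status?: string }) =>
    [...orderKeys.all, "list", filters] as const,
  detail: (id: number) => [...orderKeys.all, "detail", id] as const,
};

// Usage:
//   useQuery({ queryKey: orderKeys.detail(42), queryFn: () => orderApi.getById(42) })
//   queryClient.invalidateQueries({ queryKey: orderKeys.all })  // everything order-related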

Gotcha 4: invalidateQueries matches by prefix — invalidating too broadly is easy

queryClient.invalidateQueries({ queryKey: ["orders"] }) invalidates every query whose key starts with ["orders"]. This includes ["orders", 42], ["orders", { userId: 1 }], and ["orders", "recent"]. If you meant to only invalidate a specific order’s cache, be precise:

// Invalidates ALL orders queries — may trigger more refetches than expected
queryClient.invalidateQueries({ queryKey: ["orders"] });

// Invalidates only the specific order — use when you know exactly what changed
queryClient.invalidateQueries({ queryKey: ["orders", id], exact: true });

// Invalidates all user-specific orders lists
queryClient.invalidateQueries({ queryKey: ["orders", { userId }] });

After a successful create, invalidating all ["orders"] is usually correct — the list needs to show the new item. After a successful update to a specific order, exact: true on that order’s key is more efficient.
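
To make the prefix matching concrete, suppose the cache currently holds the keys sketched below (values are illustrative):

// Cached keys:
//   ["orders"]                 — the full list
//   ["orders", 42]             — one order's detail
//   ["orders", { userId: 1 }]  — a user-specific list
//   ["users", 1]               — unrelated query

queryClient.invalidateQueries({ queryKey: ["orders"] });
// → marks the first three stale (prefix match); ["users", 1] is untouched

queryClient.invalidateQueries({ queryKey: ["orders", 42], exact: true });
// → marks only ["orders", 42] stale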

Gotcha 5: Mutations do not automatically invalidate the cache

Coming from .NET, you might expect that after a POST/PUT/DELETE, any component displaying related data would automatically refresh. TanStack Query does not do this automatically — you must invalidate the relevant queries in onSuccess. If you forget, the UI shows stale data indefinitely.

// WRONG — cache is not updated after mutation
useMutation({
  mutationFn: (data: UpdateOrderRequest) => orderApi.update(id, data),
  // No onSuccess — the UI still shows the old data after this runs
});

// CORRECT — invalidate related queries after mutation succeeds
useMutation({
  mutationFn: (data: UpdateOrderRequest) => orderApi.update(id, data),
  onSuccess: (updatedOrder) => {
    queryClient.invalidateQueries({ queryKey: ["orders"] });
    // Or, more surgical — set the cache directly to avoid a refetch
    queryClient.setQueryData(["orders", id], updatedOrder);
  },
});

Hands-On Exercise

Build a complete data layer for an order management feature using Axios + TanStack Query.

Setup:

npm install @tanstack/react-query @tanstack/react-query-devtools axios

Task 1 — Configure the Axios client in lib/api-client.ts:

  • Base URL from process.env.NEXT_PUBLIC_API_URL
  • Request interceptor that reads an auth token from localStorage (key: "auth-token") and adds Authorization: Bearer <token> header
  • Response interceptor that logs all 4xx and 5xx responses to console.error with the status code and URL
  • 10-second timeout

Task 2 — Write the query hooks in hooks/use-orders.ts:

// Implement these three hooks:
export function useOrders(userId: number) { ... }
// queryKey: ["orders", { userId }]
// staleTime: 2 minutes (order lists don't change that often)

export function useOrder(id: number) { ... }
// queryKey: ["orders", id]
// enabled: only when id > 0

export function useCreateOrder() { ... }
// On success: invalidate ["orders"] queries AND set the new order in cache
// Hint: use mutateAsync in your form, mutate if you don't need to await

Task 3 — Build the OrderList component that:

  • Shows a skeleton loader when isLoading is true (three placeholder <div> elements with a pulse animation)
  • Shows an error message with a “Try again” button that calls refetch() when isError is true
  • Shows the list of orders when data is available
  • Displays a subtle “Refreshing…” indicator when isFetching is true but isLoading is false (background refetch in progress)

Task 4 — Add optimistic deletion. When the user clicks Delete:

  • Remove the order from the cached list immediately
  • Fire the delete API call
  • If the call fails, restore the removed order and show an error message
  • If the call succeeds, refetch the list to confirm

Task 5 — Verify deduplication. Render <OrderDetail id={1} /> twice on the same page. Open the Network panel in browser DevTools. Navigate to the page and confirm that only one HTTP request is made, not two.

Quick Reference

| Task | fetch | Axios | TanStack Query |
|---|---|---|---|
| GET request | fetch(url).then(r => r.json()) | apiClient.get<T>(url) | useQuery({ queryKey, queryFn }) |
| POST request | fetch(url, { method: "POST", body }) | apiClient.post<T>(url, data) | useMutation({ mutationFn }) |
| Check for HTTP errors | if (!response.ok) throw ... | Automatic | Via your queryFn |
| Add auth header | Manual per-request | interceptors.request.use() | In your queryFn / Axios interceptor |
| Retry on failure | Manual | axios-retry package | retry: 3 in QueryClient defaults |
| Cache response | Manual | None | Automatic (staleTime) |
| Loading state | Manual | Manual | isLoading, isFetching |
| Error state | Manual try/catch | isAxiosError(err) | isError, error |
| Invalidate cache | N/A | N/A | queryClient.invalidateQueries() |
| Set cache directly | N/A | N/A | queryClient.setQueryData() |
| Optimistic update | Manual | Manual | onMutate + return context |
| Rollback on failure | Manual | Manual | onError(err, vars, context) |
| Deduplicate requests | Manual | Manual | Automatic |
| Background refresh | Manual | Manual | refetchOnWindowFocus, staleTime |
| Prefetch | Manual | Manual | queryClient.prefetchQuery() |
| Cancel in-flight request | AbortController | CancelToken / AbortSignal | queryClient.cancelQueries() |

.NET to JS/TS concept map

| .NET concept | JS/TS equivalent |
|---|---|
| HttpClient | axios instance or fetch |
| IHttpClientFactory | Module-level singleton (Axios instance) |
| DelegatingHandler | Axios request/response interceptor |
| EnsureSuccessStatusCode() | if (!response.ok) throw / Axios default |
| ReadFromJsonAsync<T>() | response.json() (no runtime validation) |
| GetFromJsonAsync<T>() | apiClient.get<T>(url).then(r => r.data) |
| Polly retry | TanStack Query retry option |
| Polly circuit breaker | No direct equivalent (use custom interceptor) |
| IMemoryCache | TanStack Query cache |
| IDistributedCache | No direct equivalent in browser context |
| CancellationToken | AbortController / AbortSignal |

Further Reading

Server-Side Rendering and Hydration

For .NET engineers who know: Razor Pages (full server rendering), Blazor WASM (full client rendering), and the fundamentals of how browsers parse HTML
You’ll learn: How Next.js and Nuxt blend server and client rendering — and what “hydration” means, why it can go wrong, and how React Server Components change the mental model entirely
Time: 15-20 minutes

The .NET Way (What You Already Know)

You already know two rendering models from .NET. Razor Pages is pure server rendering: every request hits the server, the server runs C# code, and the server returns complete HTML. The browser paints it immediately. There is no JavaScript required for the page to be readable. Blazor WASM is pure client rendering: the server sends a near-empty HTML shell and a WebAssembly bundle, the browser downloads and compiles the WASM, and then JavaScript/WASM builds the entire UI in the browser. The page is blank until that process completes.

Razor Pages:
Browser → HTTP request → Server runs C# → Server returns complete HTML → Browser paints ✓

Blazor WASM:
Browser → HTTP request → Server returns <div id="app"></div> + WASM bundle
          → Browser downloads bundle → WASM runs → DOM built in browser ✓

Each has a clear tradeoff:

| | Razor Pages | Blazor WASM |
|---|---|---|
| Initial HTML | Complete (SEO-friendly) | Empty (SEO-hostile) |
| Time to first content | Fast | Slow (bundle download + compile) |
| Interactivity | Requires full page reloads | Instant after load |
| Server load | High (every page is rendered on server) | Low (server just serves static files) |
| Works without JS | Yes | No |

The hybrid model in Next.js and Nuxt is the middle path: server renders complete HTML (like Razor Pages) and then JavaScript in the browser attaches event handlers to that HTML (like Blazor). This process of attaching JavaScript to server-rendered HTML is called hydration.

The Modern JS Way

What SSR Is and Why It Matters

SSR (Server-Side Rendering) in a JavaScript framework means the same JavaScript/TypeScript component code that normally runs in the browser is also executed on the server during the request cycle. The server produces complete HTML, sends it to the browser, the browser displays it immediately, and then downloads the JavaScript bundle to make it interactive.

Next.js SSR request:
Browser → GET /orders/42
Server → runs React components in Node.js → produces complete HTML string
       → sends HTML + <script> tags for JS bundle
Browser → paints HTML immediately (user sees content)
         → downloads JS bundle
         → "hydrates" — attaches React event handlers to the existing DOM
         → page becomes interactive

The benefits over pure client-side rendering (CSR):

  • SEO: search crawlers receive complete HTML without executing JavaScript. Google, Bing, and others can index the full page content.
  • First Contentful Paint (FCP): the browser paints real content immediately, before any JavaScript runs. Users on slow connections see the page rather than a blank screen or a spinner.
  • Core Web Vitals: Largest Contentful Paint (LCP) improves because the main content arrives in the initial HTML, not after a JavaScript fetch.

What Hydration Is

Hydration is the process of taking static server-rendered HTML and making it interactive by attaching React (or Vue) event handlers and state to the existing DOM nodes.

A useful mental model: the server renders a detailed architectural blueprint (HTML). The client receives that blueprint, looks at it, and then installs the plumbing and electricity (event handlers, state). The house exists — it just isn’t functional yet.

// What the server renders (simplified)
const serverHtml = `
<div data-reactroot="">
  <button>
    Clicked 0 times   <!-- server has no concept of click state -->
  </button>
</div>
`;

// What the client-side React code is
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(c => c + 1)}>
      Clicked {count} times
    </button>
  );
}

// During hydration:
// 1. React receives the server HTML
// 2. React renders Counter() in JavaScript — gets a virtual DOM representing <button>Clicked 0 times</button>
// 3. React compares this to the real DOM (the server HTML)
// 4. If they match: React attaches onClick to the existing <button> without touching the DOM
// 5. If they don't match: React logs a warning and re-renders (replacing server HTML with client HTML)

Step 4 is what makes hydration efficient: React reuses the server-rendered DOM instead of rebuilding it from scratch. This is why the user sees content immediately — there is no blank page while JavaScript boots.

Hydration Mismatches — the Most Common SSR Bug

A hydration mismatch occurs when the HTML the server renders differs from what the client-side React component would render. React detects the difference and logs a warning, then corrects the DOM. In development, mismatches are prominent errors. In production, they produce a flash of incorrect content.

Common causes:

1. Using Date.now() or Math.random() in render

// WRONG — server renders one timestamp, client renders another timestamp
function ArticleDate() {
  return <time>{new Date().toLocaleDateString()}</time>;
  // Server: "2/18/2026"
  // Client (running slightly later): "2/18/2026" — might match, might not
  // If the server's timezone or locale differs from the browser's (server renders in UTC, browser in local time): mismatch guaranteed
}

// CORRECT — pass the date as a prop from the server
async function ArticlePage({ params }: { params: { id: string } }) {
  const article = await articleService.getById(Number(params.id));
  return <ArticleDate publishedAt={article.publishedAt} />;
}

function ArticleDate({ publishedAt }: { publishedAt: string }) {
  // publishedAt is an ISO string — deterministic on both server and client
  return <time dateTime={publishedAt}>{new Date(publishedAt).toLocaleDateString("en-US")}</time>;
}

2. Reading browser globals on both server and client

// WRONG — window does not exist on the server
function ThemeToggle() {
  const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
  // Server: ReferenceError: window is not defined — crashes
}

// CORRECT — gate browser access with typeof window check
function ThemeToggle() {
  const prefersDark =
    typeof window !== "undefined"
      ? window.matchMedia("(prefers-color-scheme: dark)").matches
      : false; // server-side default
  // Server renders false (light mode). Client reads actual preference.
  // If the user prefers dark, there will be a mismatch — handled below with suppressHydrationWarning
}

3. Rendering user-specific data on the server from a shared request context

// WRONG — different users see each other's data if server caches per-render state
function UserGreeting() {
  const user = globalUserStore.current; // shared mutable server state — dangerous
  return <p>Hello, {user.name}</p>;
}

4. The suppressHydrationWarning escape hatch

For cases where a mismatch is expected and harmless (user’s local time, user’s theme preference), React provides suppressHydrationWarning:

// Suppress the mismatch warning for elements whose server/client values are intentionally different
function LocalTime({ utcTime }: { utcTime: string }) {
  return (
    <time suppressHydrationWarning dateTime={utcTime}>
      {new Date(utcTime).toLocaleTimeString()}
      {/* Server: UTC time. Client: local timezone. Different — but that's fine here. */}
    </time>
  );
}

Use this sparingly. It suppresses the warning but does not prevent the DOM replacement — the user still sees a flash of the server value being replaced by the client value.

useEffect Only Runs on the Client

In React, useEffect does not run during server-side rendering. It runs after the component mounts in the browser. This is the correct place for code that depends on browser APIs (window, document, localStorage, navigator):

"use client";
import { useState, useEffect } from "react";

function GeolocationBanner() {
  const [location, setLocation] = useState<string | null>(null);

  // This code ONLY runs in the browser, after hydration
  useEffect(() => {
    navigator.geolocation.getCurrentPosition((pos) => {
      setLocation(`${pos.coords.latitude}, ${pos.coords.longitude}`);
    });
  }, []);

  // Server renders null (no location). Client renders after geolocation resolves.
  // No mismatch — the initial render (null) matches on both sides.
  if (!location) return null;
  return <p>Your location: {location}</p>;
}

The pattern for browser-only state:

"use client";
import { useState, useEffect } from "react";

// Component that is only meaningful in the browser
function WindowSize() {
  const [size, setSize] = useState({ width: 0, height: 0 });

  useEffect(() => {
    const update = () =>
      setSize({ width: window.innerWidth, height: window.innerHeight });
    update(); // set initial value
    window.addEventListener("resize", update);
    return () => window.removeEventListener("resize", update);
  }, []);

  return <p>{size.width} x {size.height}</p>;
}

The initial render returns { width: 0, height: 0 } on both server and client — they match. After hydration, useEffect fires and updates to real values.

React Server Components (RSC)

React Server Components are the architectural shift introduced in Next.js 13’s App Router. The distinction from classic SSR is subtle but important:

  • Classic SSR: components run on the server to generate HTML, and then the same components run again on the client during hydration. The component code ships to the browser.
  • React Server Components: the component code runs only on the server. The HTML it produces is sent to the browser, but the component’s JavaScript is never shipped to the client bundle.
// app/orders/page.tsx — a Server Component (no "use client" directive)
// This component's code never appears in the browser's JavaScript bundle

import { db } from "@/lib/db"; // A database client — never sent to the browser

export default async function OrdersPage() {
  // Direct database access — no API layer needed for this component
  // You can also use secrets here (API keys, connection strings) — never exposed to browser
  const orders = await db.orders.findMany({
    where: { userId: await getCurrentUserId() },
    orderBy: { createdAt: "desc" },
  });

  return (
    <main>
      <h1>Your Orders</h1>
      {/* Passes data to a Client Component as props */}
      <OrderTable orders={orders} />
    </main>
  );
}
// components/OrderTable.tsx — a Client Component (needs interactivity)
"use client";
import { useState } from "react";

interface Props {
  orders: Order[]; // receives serializable data from the Server Component
}

export function OrderTable({ orders }: Props) {
  const [selected, setSelected] = useState<number | null>(null);

  return (
    <table>
      {orders.map((order) => (
        <tr
          key={order.id}
          onClick={() => setSelected(order.id)}
          className={selected === order.id ? "selected" : ""}
        >
          <td>{order.id}</td>
          <td>{order.customerName}</td>
        </tr>
      ))}
    </table>
  );
}

The result: the database query runs on the server, the results are serialized and sent to the browser as props, and only the interactive OrderTable component’s JavaScript is included in the bundle. The OrdersPage code — which may include ORM imports and business logic — never reaches the browser.

Traditional React (CSR or classic SSR):
Browser bundle includes: OrdersPage + db client + ORM + OrderTable + React

React Server Components:
Server-only: OrdersPage + db client + ORM (not shipped)
Browser bundle includes: OrderTable + React only

This is a significant bundle size reduction for data-heavy applications.

The Server/Client boundary rules:

// Server Components CAN:
// - use async/await at the top level
// - import server-only modules (database clients, file system, environment variables)
// - access secrets
// - pass serializable props to Client Components

// Server Components CANNOT:
// - use useState, useReducer, useEffect (no client state)
// - use event handlers (onClick, onChange)
// - use browser APIs (window, document)
// - use Context (React context is client-only)

// Client Components CAN:
// - use all React hooks
// - use browser APIs
// - render Server Components as children (passed as props)

// Client Components CANNOT:
// - import server-only modules (marking them with the server-only package makes the build fail if they try)
// - use async/await at the top level in their render function

Streaming SSR

Traditional SSR sends the complete HTML page only after every component has finished rendering. If one component makes a slow database query, the entire page is delayed. Streaming SSR solves this by sending HTML progressively — fast parts arrive immediately while slow parts stream in as they complete.

// app/orders/[id]/page.tsx — streaming with Suspense
import { Suspense } from "react";

export default function OrderDetailPage({ params }: { params: { id: string } }) {
  return (
    <main>
      {/* This renders immediately — no data required */}
      <h1>Order Details</h1>
      <Breadcrumbs />

      {/* This streams in when the slow query completes */}
      <Suspense fallback={<OrderDetailSkeleton />}>
        <OrderDetail id={Number(params.id)} />
        {/* OrderDetail is async — it fetches data before rendering */}
      </Suspense>

      {/* This streams independently — doesn't wait for OrderDetail */}
      <Suspense fallback={<RelatedOrdersSkeleton />}>
        <RelatedOrders orderId={Number(params.id)} />
      </Suspense>
    </main>
  );
}
// components/OrderDetail.tsx — async Server Component
async function OrderDetail({ id }: { id: number }) {
  const order = await orderService.getById(id); // slow query

  if (!order) notFound();

  return (
    <div>
      <p>Customer: {order.customerName}</p>
      <p>Total: {order.total}</p>
    </div>
  );
}

The browser receives the initial HTML with the heading and the skeleton loaders. As each Suspense boundary resolves (the async component finishes), Next.js streams the resolved HTML as a script tag that replaces the fallback. The user sees progressive content rather than a blank page.

This is closer in concept to ASP.NET’s Response.Flush() or HTTP chunked transfer encoding than to anything else in .NET.

Static Site Generation (SSG) and Incremental Static Regeneration (ISR)

Next.js pages do not have to be server-rendered on every request. Static pages are rendered once at build time and served as files — like deploying a pre-generated Razor view.

// app/blog/[slug]/page.tsx
// generateStaticParams tells Next.js which pages to pre-render at build time
export async function generateStaticParams() {
  const posts = await blogService.getAllPublished();
  return posts.map((post) => ({ slug: post.slug }));
}

// This page is statically generated — HTML file created at build time
export default async function BlogPostPage({ params }: { params: { slug: string } }) {
  const post = await blogService.getBySlug(params.slug);
  return <BlogPost post={post} />;
}

ISR (Incremental Static Regeneration) adds an expiry to static pages:

// app/products/[id]/page.tsx
export const revalidate = 3600; // re-generate this page at most once per hour

// generateStaticParams still pre-renders the known product pages at build time;
// the revalidate export above controls how often each one is regenerated
export async function generateStaticParams() { ... }

export default async function ProductPage({ params }: { params: { id: string } }) {
  // On first request after the 1-hour window: page is re-generated in the background
  // Subsequent requests during re-generation: get the previous version
  // After re-generation completes: get the new version
  const product = await productService.getById(Number(params.id));
  return <ProductDetail product={product} />;
}

ISR is like [OutputCache] in ASP.NET with a time-based expiration policy — but stateless and edge-deployable.

The Rendering Decision Tree

Is this page public and content-focused (blog, marketing, docs)?
├── Yes → SSG (static generation at build time)
│   └── Does the content change frequently?
│       ├── Yes → ISR (revalidate every N seconds)
│       └── No → Full SSG (revalidate only on redeploy)
│
└── No → Is the content user-specific or requires auth?
    ├── Yes → SSR (render on each request, can access cookies/session)
    │   └── Does it have slow data sections?
    │       └── Yes → SSR + Streaming with Suspense
    │
    └── No (real-time, highly interactive, low SEO need)
        └── CSR (Client-Side Rendering with TanStack Query)

Translated to Next.js / Nuxt config:

| Rendering mode | How to configure |
|---|---|
| SSG (build time) | generateStaticParams() + no revalidate export |
| ISR | export const revalidate = 60 (seconds) |
| SSR (per request) | export const dynamic = "force-dynamic" |
| CSR (skip SSR) | "use client" + no async data fetching in the component |
| Streaming SSR | <Suspense> boundaries around async Server Components |

Key Differences

| Concept | Razor Pages | Blazor WASM | Next.js SSR | Next.js RSC |
|---|---|---|---|---|
| Where rendering happens | Server | Client | Server + Client | Server (components) + Client (interactive parts) |
| Initial HTML | Complete | Empty shell | Complete | Complete |
| JavaScript required for content | No | Yes | No | No |
| Interactive without JS | Yes (forms need round trips) | No | No | No |
| Component code in browser bundle | N/A | Yes (WASM) | Yes | Only Client Components |
| Direct DB access in component | N/A | No (needs API) | No (use API or server action) | Yes |
| Secrets in component | Yes | No | No (unless using server actions) | Yes |
| Hydration | N/A | N/A | Required | Required |

Gotchas for .NET Engineers

Gotcha 1: “window is not defined” — the most common SSR error

When your component code runs on the server, Node.js does not have browser globals. window, document, navigator, localStorage, sessionStorage, and location all throw ReferenceError. This crashes the server render.

// WRONG — crashes the server
const theme = localStorage.getItem("theme"); // at module level — runs on server

// WRONG — crashes the server (no useEffect guard)
"use client";
function ThemeConsumer() {
  const theme = localStorage.getItem("theme"); // in render — runs during SSR
  return <div className={theme ?? "light"}>{/* ... */}</div>;
}

// CORRECT — typeof guard for code outside components
const isBrowser = typeof window !== "undefined";
const theme = isBrowser ? localStorage.getItem("theme") : null;

// CORRECT — useEffect for component code (only runs in browser)
"use client";
import { useState, useEffect } from "react";

function ThemeConsumer() {
  const [theme, setTheme] = useState<string | null>(null); // server and initial client: null

  useEffect(() => {
    setTheme(localStorage.getItem("theme")); // only runs in browser, after hydration
  }, []);

  return <div className={theme ?? "light"}>{/* ... */}</div>;
}

If you have an entire component that only makes sense in the browser (maps, rich text editors, WebGL), use Next.js’s dynamic import with ssr: false:

import dynamic from "next/dynamic";

// MapComponent is never rendered on the server
const MapComponent = dynamic(() => import("@/components/MapComponent"), {
  ssr: false,
  loading: () => <div>Loading map...</div>,
});

Gotcha 2: Hydration mismatches from localStorage / session-dependent rendering

The most insidious hydration mismatch pattern: the server renders the logged-out state, the client reads the auth token from localStorage and renders the logged-in state. The mismatch error fires, and React replaces the server HTML with the client HTML — the user sees a flash.

// WRONG — server renders "Log in", client renders "Hello Chris" — mismatch
"use client";
function NavBar() {
  const user = JSON.parse(localStorage.getItem("user") ?? "null");
  return <nav>{user ? `Hello ${user.name}` : "Log in"}</nav>;
}

// CORRECT — defer user-specific rendering to after hydration
"use client";
import { useState, useEffect } from "react";

function NavBar() {
  const [user, setUser] = useState<User | null>(null);

  useEffect(() => {
    const stored = localStorage.getItem("user");
    if (stored) setUser(JSON.parse(stored));
  }, []);

  // Server and initial client both render: "Log in" — no mismatch
  // After hydration: useEffect fires and sets user — client re-renders correctly
  return <nav>{user ? `Hello ${user.name}` : "Log in"}</nav>;
}

// BETTER — store auth in cookies (readable on the server)
// Then server and client agree on the auth state from the start

Using cookies for auth state instead of localStorage eliminates this class of bug. The server can read request.cookies and render the correct state from the start.
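
A minimal sketch of the cookie-based version (assuming a "user" cookie is set at login; in recent Next.js versions cookies() is async, and awaiting it is harmless on older ones):

// components/NavBar.tsx — a Server Component (no "use client"), rendered from a layout
import { cookies } from "next/headers";

export async function NavBar() {
  const cookieStore = await cookies();
  const raw = cookieStore.get("user")?.value;      // assumption: JSON user info stored at login
  const user = raw ? (JSON.parse(raw) as { name: string }) : null;

  // The server-rendered HTML already shows the correct state — nothing for hydration to disagree with
  return <nav>{user ? `Hello ${user.name}` : "Log in"}</nav>;
}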

Gotcha 3: Server Components cannot be made interactive

A common mistake when first using the App Router: adding an onClick or useState to a Server Component and being confused by the error message. The fix is always to extract the interactive part into a separate Client Component.

// WRONG — Server Component trying to be interactive
// app/orders/page.tsx (no "use client")
export default async function OrdersPage() {
  const orders = await db.orders.findMany();

  return (
    <main>
      {orders.map((order) => (
        // ERROR: Event handlers cannot be passed to Client Component props
        <div key={order.id} onClick={() => console.log(order.id)}>
          {order.customerName}
        </div>
      ))}
    </main>
  );
}

// CORRECT — extract the interactive part
// app/orders/page.tsx
export default async function OrdersPage() {
  const orders = await db.orders.findMany();
  return <OrderList orders={orders} />; // pass data as props
}

// components/OrderList.tsx
"use client"; // interactive behavior lives here
export function OrderList({ orders }: { orders: Order[] }) {
  return (
    <main>
      {orders.map((order) => (
        <div key={order.id} onClick={() => console.log(order.id)}>
          {order.customerName}
        </div>
      ))}
    </main>
  );
}

The split follows the same principle as ASP.NET Blazor’s Server/WASM split: put the data-fetching and business logic on the server, put the interactivity on the client.

Gotcha 4: Props passed from Server Components to Client Components must be serializable

Server Components pass data to Client Components via props. These props are serialized to JSON before being sent to the browser (they travel over the network as part of the RSC payload). Non-serializable values cannot be passed as props.

// WRONG — functions, class instances, and Promises are not serializable
export default async function OrdersPage() {
  const orders = await db.orders.findMany();

  return (
    <OrderList
      orders={orders}
      onDelete={(id) => deleteOrder(id)}  // ERROR: functions not serializable
      dateFormatter={new Intl.DateTimeFormat("en-US")} // ERROR: class instance
    />
  );
}

// CORRECT — pass only serializable data (primitives, plain objects, arrays)
export default async function OrdersPage() {
  const orders = await db.orders.findMany();

  return (
    <OrderList
      orders={orders}
      // The Client Component handles its own event handlers
      // The Client Component formats dates using its own Intl instances
    />
  );
}

Serializable types: string, number, boolean, null, undefined, plain objects ({}), arrays, and instances of Date (serialized as ISO strings). Not serializable: functions, class instances with methods, Map, Set, Symbol, BigInt.

Gotcha 5: SSG pages with dynamic data require explicit cache invalidation

SSG pages are generated at build time and served from a CDN. When the underlying data changes — a product price update, a blog post edit — the static page does not automatically update. You must either:

  1. Trigger a redeploy (rebuilds all static pages)
  2. Use ISR (revalidation window determines how stale the page can be)
  3. Use on-demand revalidation (call revalidatePath() from a webhook or server action)
// app/api/webhooks/cms/route.ts — on-demand ISR revalidation
import { revalidatePath } from "next/cache";
import { NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  const body = await request.json();

  // Verify the webhook signature (never skip this in production)
  const signature = request.headers.get("x-webhook-signature");
  if (!verifySignature(signature, body)) {
    return NextResponse.json({ error: "Invalid signature" }, { status: 401 });
  }

  // Revalidate the specific page that changed
  if (body.type === "post.published") {
    revalidatePath(`/blog/${body.slug}`);
    revalidatePath("/blog"); // also invalidate the index
  }

  return NextResponse.json({ revalidated: true });
}

This is the JS equivalent of Response.RemoveOutputCacheItem() in ASP.NET — purge a specific cached URL on demand rather than waiting for expiry.

Hands-On Exercise

Build a product catalog page that demonstrates each rendering mode.

Setup: npx create-next-app@latest --typescript product-catalog

Task 1 — SSG product list page. Create app/products/page.tsx as a Server Component that:

  • Fetches all products from https://fakestoreapi.com/products using fetch() with { next: { revalidate: 3600 } } (ISR: revalidate hourly)
  • Renders a grid of product cards
  • Behaves like build-time static generation — check that the page is statically generated by running npm run build and inspecting the output (routes marked with ○ in the build output are static)

Task 2 — SSR product detail page. Create app/products/[id]/page.tsx that:

  • Fetches a specific product: https://fakestoreapi.com/products/{id}
  • Uses notFound() if the product ID is invalid
  • Uses streaming: wrap the “Related Products” section (fetched from /products/category/{category}) in a <Suspense> boundary with a skeleton fallback
  • Force the page to SSR with export const dynamic = "force-dynamic" and verify in DevTools that the page arrives as complete HTML before any JavaScript runs

Task 3 — Demonstrate hydration. Create a Client Component ProductReviewForm that:

  • Initially renders with no values (matches server)
  • Uses useEffect to load a draft review from localStorage after hydration
  • Observe in DevTools that the form renders twice: once with empty state (SSR), once with the draft (after hydration)

Task 4 — Trigger a mismatch intentionally. In the product detail page, add a component that renders Date.now() directly (without suppressHydrationWarning). Run the dev server and observe the hydration warning in the console. Then fix it using the patterns from the Gotchas section.

Task 5 — RSC boundary. Refactor the product list so that:

  • app/products/page.tsx (Server Component) fetches data and passes it to <ProductGrid products={products} />
  • components/ProductGrid.tsx (Client Component) handles filter state and sorting
  • Verify that no fetch code appears in the browser’s JavaScript bundle (Network tab → JS files — the fetch() call to the API should not appear)

Quick Reference

| Rendering mode | Configured by | Data freshness | SEO | Best for |
|---|---|---|---|---|
| SSG | generateStaticParams() | Build time | Excellent | Blogs, docs, marketing |
| ISR | export const revalidate = N | Up to N seconds stale | Excellent | Catalogs, semi-static content |
| On-demand ISR | revalidatePath() / revalidateTag() | Immediately on webhook | Excellent | CMS-driven content |
| SSR | export const dynamic = "force-dynamic" | Always fresh | Excellent | Authenticated pages, real-time |
| Streaming SSR | <Suspense> boundaries | Always fresh | Excellent | Pages with mixed-speed data |
| CSR | "use client" + client fetching | TanStack Query managed | Poor (unless pre-rendered shell) | Dashboards, admin, real-time |

Key terms

| Term | Definition | .NET analog |
|---|---|---|
| SSR | Server renders complete HTML per request | Razor Pages |
| CSR | Browser renders everything from JS | Blazor WASM |
| SSG | HTML generated at build time, served as static file | Pre-compiled Razor views cached indefinitely |
| ISR | SSG pages regenerated periodically | [OutputCache] with sliding expiration |
| Hydration | Attaching React event handlers to server-rendered HTML | Blazor server pre-rendering + reconnect |
| RSC | Component runs only on server, code not shipped to browser | Code-behind that never reaches the client |
| Streaming | Progressive HTML delivery via HTTP chunked transfer | Response.Flush() mid-render |
| Hydration mismatch | Server and client render different HTML | Inconsistent ViewState |
| "use client" | Opts a component into client-side execution | @rendermode InteractiveWebAssembly in Blazor |
| Suspense | Boundary that shows a fallback while async children load | Loading placeholder + conditional rendering |
| revalidatePath | Purge specific pages from the static cache | Response.RemoveOutputCacheItem() |

Common errors and fixes

| Error | Cause | Fix |
|---|---|---|
| ReferenceError: window is not defined | Browser global accessed during SSR | Use typeof window !== "undefined" guard or useEffect |
| ReferenceError: localStorage is not defined | localStorage accessed during SSR | Move to useEffect or use cookies |
| Hydration warning in console | Server and client render different HTML | Check for Date.now(), browser globals, or user-specific data in render |
| Event handlers cannot be passed to Client Component props | Adding onClick to a Server Component | Extract to a "use client" component |
| Props are not serializable | Passing functions or class instances from Server to Client | Pass only plain data; handle functions inside Client Component |
| SSG page not updating after data change | No revalidation configured | Add revalidate export or use revalidatePath() webhook |
| useEffect not running | Component is a Server Component (no "use client") | Add "use client" directive at top of file |

Further Reading

NestJS Architecture: Modules, Controllers, Services

For .NET engineers who know: ASP.NET Core — Program.cs, IServiceCollection, controllers, services, constructor injection, and the request pipeline
You’ll learn: How NestJS maps one-to-one with ASP.NET Core’s architecture, and how to build a complete CRUD API using the same mental model you already have
Time: 15-20 min read

The .NET Way (What You Already Know)

An ASP.NET Core application has a fixed architecture that you probably navigate on autopilot. Program.cs wires everything together: you call builder.Services.Add*() to register services, app.Use*() to add middleware, and the framework resolves the dependency graph at startup. Controllers are classes decorated with [ApiController] and [Route], action methods are decorated with [HttpGet], [HttpPost], etc. Services are plain classes registered with AddScoped, AddSingleton, or AddTransient, and injected via constructors.

// Program.cs — the whole startup in one place
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddScoped<IOrderService, OrderService>();
builder.Services.AddSingleton<ICacheService, RedisCacheService>();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.Run();
// OrdersController.cs
[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    private readonly IOrderService _orderService;

    public OrdersController(IOrderService orderService)
    {
        _orderService = orderService;
    }

    [HttpGet("{id}")]
    public async Task<ActionResult<OrderDto>> GetOrder(int id)
    {
        var order = await _orderService.GetByIdAsync(id);
        return order is null ? NotFound() : Ok(order);
    }
}
// OrderService.cs
public class OrderService : IOrderService
{
    private readonly AppDbContext _db;

    public OrderService(AppDbContext db)
    {
        _db = db;
    }

    public async Task<Order?> GetByIdAsync(int id) =>
        await _db.Orders.FindAsync(id);
}

The pattern is: register in IServiceCollection, inject via constructor, decorate with attributes. NestJS does exactly this, with decorators instead of attributes and modules instead of IServiceCollection.

The NestJS Way

Creating a Project

# Install the NestJS CLI globally
pnpm add -g @nestjs/cli

# Create a new project (equivalent to: dotnet new webapi -n MyApi)
nest new my-api
cd my-api

# The CLI asks for a package manager — choose pnpm
# It scaffolds the project and installs dependencies

# Run in development mode with hot reload (equivalent to: dotnet watch run)
pnpm run start:dev

The generated project structure:

my-api/
├── src/
│   ├── app.controller.ts      # Root controller (can delete)
│   ├── app.module.ts          # Root module — equivalent to Program.cs
│   ├── app.service.ts         # Root service (can delete)
│   └── main.ts                # Entry point — equivalent to Program.cs bootstrap
├── test/
├── nest-cli.json              # NestJS CLI config
├── tsconfig.json
└── package.json
// main.ts — Entry point. Equivalent to the top of Program.cs.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.setGlobalPrefix('api');        // Like: app.MapControllers() with a route prefix
  await app.listen(3000);
}
bootstrap();

Modules — The IServiceCollection Equivalent

In ASP.NET Core, IServiceCollection is a flat list. You register everything in one place and the DI container handles resolution globally.

NestJS uses modules instead. Each module is a class decorated with @Module() that explicitly declares:

  • controllers — what controllers belong to this module
  • providers — what services are available within this module
  • imports — what other modules this module depends on
  • exports — what providers this module makes available to other modules

This is stricter than ASP.NET Core. In .NET, any registered service is globally available to any other service. In NestJS, a service in OrdersModule cannot be injected into UsersModule unless OrdersModule explicitly exports it.

// app.module.ts — Root module. Equivalent to builder.Services.Add*() in Program.cs.
import { Module } from '@nestjs/common';
import { OrdersModule } from './orders/orders.module';
import { UsersModule } from './users/users.module';

@Module({
  imports: [
    OrdersModule,   // Like: builder.Services.AddModule<OrdersModule>() (hypothetical)
    UsersModule,
  ],
})
export class AppModule {}
// orders/orders.module.ts
import { Module } from '@nestjs/common';
import { OrdersController } from './orders.controller';
import { OrdersService } from './orders.service';

@Module({
  controllers: [OrdersController],   // Registers this controller's routes
  providers: [OrdersService],        // Equivalent to: builder.Services.AddScoped<OrdersService>()
  exports: [OrdersService],          // Makes OrdersService injectable in other modules
})
export class OrdersModule {}

The exports array is the key difference from ASP.NET Core. If you forget to export a provider, you get a clean error at startup: Nest can't resolve dependencies of the XyzService. This is actually nicer than .NET’s equivalent, which is a runtime InvalidOperationException when the first request hits.

Controllers — Exactly Like ASP.NET Core

The mapping is direct enough that you can read NestJS controllers on your first day:

// orders/orders.controller.ts
import {
  Controller,
  Get,
  Post,
  Put,
  Delete,
  Body,
  Param,
  ParseIntPipe,
  HttpCode,
  HttpStatus,
} from '@nestjs/common';
import { OrdersService } from './orders.service';
import { CreateOrderDto } from './dto/create-order.dto';
import { UpdateOrderDto } from './dto/update-order.dto';

// @Controller('orders') = [Route("orders")] on the class
// Combined with app.setGlobalPrefix('api'), routes are: /api/orders
@Controller('orders')
export class OrdersController {
  // Constructor injection — identical to ASP.NET Core
  constructor(private readonly ordersService: OrdersService) {}

  // @Get() = [HttpGet]
  // No route segment = GET /api/orders
  @Get()
  findAll() {
    return this.ordersService.findAll();
  }

  // @Get(':id') = [HttpGet("{id}")]
  // @Param('id') = the {id} route segment
  // ParseIntPipe = equivalent to the automatic int conversion in model binding
  @Get(':id')
  findOne(@Param('id', ParseIntPipe) id: number) {
    return this.ordersService.findOne(id);
  }

  // @Post() = [HttpPost]
  // @Body() = [FromBody] — deserializes the request body
  @Post()
  create(@Body() createOrderDto: CreateOrderDto) {
    return this.ordersService.create(createOrderDto);
  }

  // @Put(':id') = [HttpPut("{id}")]
  @Put(':id')
  update(
    @Param('id', ParseIntPipe) id: number,
    @Body() updateOrderDto: UpdateOrderDto,
  ) {
    return this.ordersService.update(id, updateOrderDto);
  }

  // @Delete(':id') = [HttpDelete("{id}")]
  // @HttpCode() = [ProducesResponseType(204)] — sets the success status code
  @Delete(':id')
  @HttpCode(HttpStatus.NO_CONTENT)
  remove(@Param('id', ParseIntPipe) id: number) {
    return this.ordersService.remove(id);
  }
}

Decorator-to-attribute mapping:

ASP.NET Core | NestJS | Notes
[ApiController] + [Route("orders")] | @Controller('orders') | Combined into one decorator
[HttpGet] | @Get() |
[HttpGet("{id}")] | @Get(':id') | NestJS uses :param syntax
[HttpPost] | @Post() |
[HttpPut("{id}")] | @Put(':id') |
[HttpDelete("{id}")] | @Delete(':id') |
[FromBody] | @Body() | Parameter decorator
[FromRoute] / route parameter | @Param('name') | Parameter decorator
[FromQuery] | @Query('name') | Parameter decorator
[FromHeader] | @Headers('name') | Parameter decorator
IActionResult / ActionResult<T> | Return T directly | NestJS serializes the return value
Ok(result) | Return the value | 200 is the default for @Get, @Put
Created(result) | Return the value with @HttpCode(201) | Or use @Post() which defaults to 201
NoContent() | @HttpCode(204) |
NotFound() | throw new NotFoundException() | Covered in the Gotchas section

Services — @Injectable() is AddScoped()

A NestJS service is a plain TypeScript class with the @Injectable() decorator. The decorator marks it as a participant in the DI system — analogous to what AddScoped<T>() does in .NET (with singleton as the default, which we’ll cover in a moment).

// orders/orders.service.ts
import { Injectable, NotFoundException } from '@nestjs/common';
import { CreateOrderDto } from './dto/create-order.dto';
import { UpdateOrderDto } from './dto/update-order.dto';

// @Injectable() = registration token for the DI container
// The service must ALSO be listed in a module's providers array to actually be registered
@Injectable()
export class OrdersService {
  // In a real app, inject PrismaService here instead of in-memory storage
  private readonly orders: Order[] = [];
  private nextId = 1;

  findAll(): Order[] {
    return this.orders;
  }

  findOne(id: number): Order {
    const order = this.orders.find((o) => o.id === id);
    // Equivalent to: return NotFound() in the controller
    // NestJS exception filters handle this and return a 404 response
    if (!order) {
      throw new NotFoundException(`Order #${id} not found`);
    }
    return order;
  }

  create(dto: CreateOrderDto): Order {
    const order: Order = { id: this.nextId++, ...dto };
    this.orders.push(order);
    return order;
  }

  update(id: number, dto: UpdateOrderDto): Order {
    const index = this.orders.findIndex((o) => o.id === id);
    if (index === -1) {
      throw new NotFoundException(`Order #${id} not found`);
    }
    this.orders[index] = { ...this.orders[index], ...dto };
    return this.orders[index];
  }

  remove(id: number): void {
    const index = this.orders.findIndex((o) => o.id === id);
    if (index === -1) {
      throw new NotFoundException(`Order #${id} not found`);
    }
    this.orders.splice(index, 1);
  }
}

NotFoundException and its siblings (BadRequestException, ConflictException, etc.) are NestJS’s equivalent of returning NotFound(), BadRequest(), etc. from a controller. Because services don’t have access to the Response object, throwing exceptions is the right pattern — NestJS’s built-in exception filter catches them and converts them to the appropriate HTTP responses.

DTOs

DTOs in NestJS are plain TypeScript classes. They look like C# DTO classes but without attributes — validation is applied separately via pipes (covered in Article 4.2).

// orders/dto/create-order.dto.ts
export class CreateOrderDto {
  customerId: number;
  items: Array<{ productId: number; quantity: number }>;
  notes?: string;
}

// orders/dto/update-order.dto.ts
// PartialType makes all properties optional — equivalent to your UpdateDto in C#
// where all fields are nullable/optional
import { PartialType } from '@nestjs/mapped-types'; // pnpm add @nestjs/mapped-types
import { CreateOrderDto } from './create-order.dto';

export class UpdateOrderDto extends PartialType(CreateOrderDto) {}

Dependency Injection Between Modules

Here’s a concrete example of cross-module injection:

// notifications/notifications.module.ts
import { Module } from '@nestjs/common';
import { NotificationsService } from './notifications.service';

@Module({
  providers: [NotificationsService],
  exports: [NotificationsService],  // Without this, other modules can't inject it
})
export class NotificationsModule {}

// orders/orders.module.ts — importing NotificationsModule
import { Module } from '@nestjs/common';
import { NotificationsModule } from '../notifications/notifications.module';
import { OrdersController } from './orders.controller';
import { OrdersService } from './orders.service';

@Module({
  imports: [NotificationsModule],      // Import the module to access its exports
  controllers: [OrdersController],
  providers: [OrdersService],
})
export class OrdersModule {}

// orders/orders.service.ts — injecting NotificationsService
@Injectable()
export class OrdersService {
  constructor(
    private readonly notificationsService: NotificationsService,  // Resolves from NotificationsModule
  ) {}
}

In .NET terms, importing a module is like calling builder.Services.Add<NotificationsModule>() where the module registers its own services, and exporting a service is like making it public vs. internal.

Provider Scopes

In ASP.NET Core, you choose AddSingleton, AddScoped, or AddTransient. NestJS has the same three scopes with different names:

ASP.NET Core | NestJS | Behavior
AddSingleton<T>() | Scope.DEFAULT (the default) | One instance for the entire application lifetime
AddScoped<T>() | Scope.REQUEST | One instance per HTTP request
AddTransient<T>() | Scope.TRANSIENT | New instance every time it’s injected

The default in NestJS is Scope.DEFAULT (singleton). ASP.NET Core has no single default (you choose per registration), but the common convention is AddScoped for most services.

import { Injectable, Scope } from '@nestjs/common';

// Singleton (the default — same as AddSingleton in .NET)
@Injectable()
export class CacheService {}

// Request-scoped (same as AddScoped in .NET)
@Injectable({ scope: Scope.REQUEST })
export class RequestContextService {}

// Transient (same as AddTransient in .NET)
@Injectable({ scope: Scope.TRANSIENT })
export class LoggerService {}

The important implication: because the default scope is singleton, services should be stateless by default (just like singleton services in .NET). If you need per-request state, use Scope.REQUEST. Note that request-scoped providers cause a performance overhead because NestJS has to create new instances on each request and cannot cache the resolved dependency graph — the same trade-off as in ASP.NET Core.
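
To make the request scope concrete, here is a minimal sketch of a request-scoped provider that reads per-request state. The service name and the correlation-id header are illustrative; REQUEST is the real injection token from @nestjs/core.

// request-context.service.ts — a sketch
import { Inject, Injectable, Scope } from '@nestjs/common';
import { REQUEST } from '@nestjs/core';
import { Request } from 'express';

@Injectable({ scope: Scope.REQUEST })
export class RequestContextService {
  // REQUEST-scoped providers can inject the current request object,
  // playing roughly the role of IHttpContextAccessor in ASP.NET Core.
  constructor(@Inject(REQUEST) private readonly request: Request) {}

  get correlationId(): string {
    return (this.request.headers['x-correlation-id'] as string) ?? 'unknown';
  }
}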

The Request Lifecycle

In ASP.NET Core:

graph TD
    subgraph aspnet["ASP.NET Core Request Lifecycle"]
        A1["Request"]
        A2["Middleware pipeline\n(UseAuthentication, UseAuthorization, etc.)"]
        A3["Model binding"]
        A4["Action filters (before)"]
        A5["Controller action method"]
        A6["Action filters (after)"]
        A7["Response"]
        A1 --> A2 --> A3 --> A4 --> A5 --> A6 --> A7
    end

In NestJS:

graph TD
    subgraph nestjs["NestJS Request Lifecycle"]
        N1["Request"]
        N2["Middleware (global → module-level)"]
        N3["Guards (global → controller → handler)"]
        N4["Interceptors — before (global → controller → handler)"]
        N5["Pipes (global → controller → handler)"]
        N6["Controller handler method"]
        N7["Interceptors — after (handler → controller → global)"]
        N8["Exception filters (if an exception was thrown)"]
        N9["Response"]
        N1 --> N2 --> N3 --> N4 --> N5 --> N6 --> N7 --> N8 --> N9
    end

This is covered in depth in Article 4.2. For now, the key insight is that the same concept (things that run before/after your controller logic) exists in both frameworks under different names.

Building a Complete CRUD API

Let’s build a complete, runnable Orders API that maps directly to what you’d write in ASP.NET Core. This uses Prisma for the database layer — if you don’t have it set up yet, the service can use an in-memory array and you can swap it later.

Step 1: Generate the Module

# The NestJS CLI generates the full set of files with correct wiring
# Equivalent to using Visual Studio's "Add Controller" scaffolding
nest generate module orders
nest generate controller orders
nest generate service orders

# Or all at once with a resource (generates CRUD boilerplate):
nest generate resource orders
# The CLI asks if you want REST API or GraphQL, and whether to generate CRUD entry points

Step 2: Define the Data Types

// orders/order.entity.ts
// In a real project this would be your Prisma-generated type.
// "Entity" is the NestJS convention for the domain model class.
export class Order {
  id: number;
  customerId: number;
  status: 'pending' | 'confirmed' | 'shipped' | 'delivered' | 'cancelled';
  totalCents: number;
  createdAt: Date;
  updatedAt: Date;
}
// orders/dto/create-order.dto.ts
export class CreateOrderDto {
  customerId: number;
  items: Array<{
    productId: number;
    quantity: number;
    unitPriceCents: number;
  }>;
}

// orders/dto/update-order.dto.ts
import { PartialType } from '@nestjs/mapped-types'; // pnpm add @nestjs/mapped-types
import { CreateOrderDto } from './create-order.dto';

export class UpdateOrderDto extends PartialType(CreateOrderDto) {}

Step 3: The Service

// orders/orders.service.ts
import { Injectable, NotFoundException } from '@nestjs/common';
import { Order } from './order.entity';
import { CreateOrderDto } from './dto/create-order.dto';
import { UpdateOrderDto } from './dto/update-order.dto';

@Injectable()
export class OrdersService {
  // In production: inject PrismaService and use this.prisma.order.*
  private orders: Order[] = [];
  private nextId = 1;

  async findAll(): Promise<Order[]> {
    // Prisma equivalent: return this.prisma.order.findMany();
    return this.orders;
  }

  async findOne(id: number): Promise<Order> {
    // Prisma equivalent: const order = await this.prisma.order.findUnique({ where: { id } });
    const order = this.orders.find((o) => o.id === id);
    if (!order) {
      // NestJS translates this to a 404 response with a standard error body
      throw new NotFoundException(`Order #${id} not found`);
    }
    return order;
  }

  async create(dto: CreateOrderDto): Promise<Order> {
    const totalCents = dto.items.reduce(
      (sum, item) => sum + item.quantity * item.unitPriceCents,
      0,
    );
    const order: Order = {
      id: this.nextId++,
      customerId: dto.customerId,
      status: 'pending',
      totalCents,
      createdAt: new Date(),
      updatedAt: new Date(),
    };
    // Prisma equivalent: return this.prisma.order.create({ data: { ... } });
    this.orders.push(order);
    return order;
  }

  async update(id: number, dto: UpdateOrderDto): Promise<Order> {
    const index = this.orders.findIndex((o) => o.id === id);
    if (index === -1) {
      throw new NotFoundException(`Order #${id} not found`);
    }
    this.orders[index] = {
      ...this.orders[index],
      ...dto,
      updatedAt: new Date(),
    };
    // Prisma equivalent: return this.prisma.order.update({ where: { id }, data: dto });
    return this.orders[index];
  }

  async remove(id: number): Promise<void> {
    const index = this.orders.findIndex((o) => o.id === id);
    if (index === -1) {
      throw new NotFoundException(`Order #${id} not found`);
    }
    // Prisma equivalent: await this.prisma.order.delete({ where: { id } });
    this.orders.splice(index, 1);
  }
}

Step 4: The Controller

// orders/orders.controller.ts
import {
  Controller,
  Get,
  Post,
  Put,
  Delete,
  Body,
  Param,
  ParseIntPipe,
  HttpCode,
  HttpStatus,
} from '@nestjs/common';
import { OrdersService } from './orders.service';
import { CreateOrderDto } from './dto/create-order.dto';
import { UpdateOrderDto } from './dto/update-order.dto';
import { Order } from './order.entity';

@Controller('orders')  // Routes: /api/orders (with global prefix)
export class OrdersController {
  constructor(private readonly ordersService: OrdersService) {}

  // GET /api/orders
  @Get()
  findAll(): Promise<Order[]> {
    return this.ordersService.findAll();
  }

  // GET /api/orders/:id
  // ParseIntPipe converts the ':id' string parameter to a number
  // Throws a 400 if ':id' is not a valid integer
  @Get(':id')
  findOne(@Param('id', ParseIntPipe) id: number): Promise<Order> {
    return this.ordersService.findOne(id);
  }

  // POST /api/orders — returns 201 Created by default
  @Post()
  create(@Body() createOrderDto: CreateOrderDto): Promise<Order> {
    return this.ordersService.create(createOrderDto);
  }

  // PUT /api/orders/:id
  @Put(':id')
  update(
    @Param('id', ParseIntPipe) id: number,
    @Body() updateOrderDto: UpdateOrderDto,
  ): Promise<Order> {
    return this.ordersService.update(id, updateOrderDto);
  }

  // DELETE /api/orders/:id — returns 204 No Content
  @Delete(':id')
  @HttpCode(HttpStatus.NO_CONTENT)
  remove(@Param('id', ParseIntPipe) id: number): Promise<void> {
    return this.ordersService.remove(id);
  }
}

Step 5: Wire Up the Module

// orders/orders.module.ts
import { Module } from '@nestjs/common';
import { OrdersController } from './orders.controller';
import { OrdersService } from './orders.service';

@Module({
  controllers: [OrdersController],
  providers: [OrdersService],
  exports: [OrdersService],  // Export if other modules need to inject OrdersService
})
export class OrdersModule {}
// app.module.ts — Register the feature module
import { Module } from '@nestjs/common';
import { OrdersModule } from './orders/orders.module';

@Module({
  imports: [OrdersModule],
})
export class AppModule {}

Step 6: Test It

pnpm run start:dev

# In another terminal:
curl -X POST http://localhost:3000/api/orders \
  -H "Content-Type: application/json" \
  -d '{"customerId": 1, "items": [{"productId": 10, "quantity": 2, "unitPriceCents": 1999}]}'
# Returns: {"id":1,"customerId":1,"status":"pending","totalCents":3998,"createdAt":"...","updatedAt":"..."}

curl http://localhost:3000/api/orders/1
# Returns the order

curl http://localhost:3000/api/orders/999
# Returns: {"statusCode":404,"message":"Order #999 not found","error":"Not Found"}

The 404 response format comes from NestJS’s built-in exception filter — the same JSON structure every NestJS app returns by default. The ASP.NET Core counterpart is the ProblemDetails format that [ApiController] produces for error status codes.

Key Differences

Concern | ASP.NET Core | NestJS | Notes
Service registration location | Program.cs (one place) | Each @Module() (distributed) | NestJS modules co-locate registration with the feature
Default service scope | You choose per registration | DEFAULT (singleton) | NestJS defaults to singleton; request scope is opt-in
Service visibility | Global (any registered service is injectable everywhere) | Scoped to module; must export to share | NestJS is stricter — good for large codebases
Returning errors from controllers | return NotFound() | throw new NotFoundException() | NestJS services throw; filters convert to HTTP responses
Route definition | [Route] + [HttpGet] on methods | @Controller() + @Get() on methods | Functionally identical
Parameter binding | [FromBody], [FromQuery], etc. | @Body(), @Query(), etc. | Same concept, parameter decorators vs. parameter attributes
Interface extraction for services | Common pattern (IOrderService) | Rarely used — inject the concrete class | TypeScript’s structural typing makes interfaces less necessary
Startup validation | Resolved lazily at first request | Resolved eagerly at startup | NestJS validates the entire DI graph before accepting requests

The interface point deserves explanation. In C#, you extract an interface (IOrderService) primarily for two reasons: to enable mocking in tests, and to allow DI to swap implementations. In TypeScript/NestJS, you can mock a class directly (structural typing means any object with matching methods works), and swapping implementations can be done via module configuration. Most NestJS codebases inject concrete classes, not interfaces.
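
A sketch of what that looks like in practice, assuming Jest and @nestjs/testing; the fake object and CachedOrdersService are hypothetical names:

// orders.controller.spec.ts — a minimal sketch
import { Test } from '@nestjs/testing';
import { OrdersController } from './orders.controller';
import { OrdersService } from './orders.service';

it('lists orders', async () => {
  // Structural typing: any object with matching methods can stand in for the
  // concrete class; no IOrdersService interface is required.
  const fakeOrders = { findAll: async () => [] };

  const moduleRef = await Test.createTestingModule({
    controllers: [OrdersController],
    providers: [OrdersService],
  })
    .overrideProvider(OrdersService)
    .useValue(fakeOrders)
    .compile();

  const controller = moduleRef.get(OrdersController);
  expect(await controller.findAll()).toEqual([]);
});

// Swapping implementations in a module is a custom provider, roughly what
// AddScoped<IOrderService, CachedOrderService>() achieves in .NET:
// providers: [{ provide: OrdersService, useClass: CachedOrdersService }]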

Gotchas for .NET Engineers

Gotcha 1: Forgetting to List Providers in the Module

@Injectable() is not enough on its own. The service must also appear in the providers array of a module, or NestJS will refuse to inject it. The error message is specific but confusing at first:

Nest can't resolve dependencies of the OrdersController (?).
Please make sure that the argument OrdersService at index [0]
is available in the OrdersModule context.

This means you added @Injectable() to OrdersService but did not add it to OrdersModule.providers.

// WRONG — @Injectable() alone is not enough
@Injectable()
export class OrdersService { /* ... */ }

@Module({
  controllers: [OrdersController],
  // providers: [OrdersService]  ← forgot this
})
export class OrdersModule {}
// CORRECT
@Module({
  controllers: [OrdersController],
  providers: [OrdersService],    // Must be here
})
export class OrdersModule {}

In ASP.NET Core, there’s no equivalent mistake — builder.Services.AddScoped<OrderService>() is the only registration step. NestJS splits registration into two steps: the decorator on the class, and the listing in the module.

Gotcha 2: Forgetting to Export Services That Other Modules Need

If OrdersModule needs to inject NotificationsService, and you import NotificationsModule but NotificationsService is not in NotificationsModule.exports, you get another confusing error. The service exists and is registered — it’s just not visible outside its module.

// NotificationsModule — service is registered but not exported
@Module({
  providers: [NotificationsService],
  // exports: [NotificationsService]  ← forgot this
})
export class NotificationsModule {}

// OrdersModule — imports the module, but can't see NotificationsService
@Module({
  imports: [NotificationsModule],
  providers: [OrdersService],
  controllers: [OrdersController],
})
export class OrdersModule {}

// This injection fails at startup with:
// "Nest can't resolve dependencies of the OrdersService"
@Injectable()
export class OrdersService {
  constructor(private readonly notificationsService: NotificationsService) {} // ERROR
}

The fix is to add NotificationsService to NotificationsModule.exports. In ASP.NET Core, there is no equivalent — every registered service is globally available. The NestJS module system is more explicit, which is genuinely better for large codebases, but requires getting used to.
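
The corrected module, for completeness:

// NotificationsModule — registered AND exported
@Module({
  providers: [NotificationsService],
  exports: [NotificationsService],   // Now OrdersService can inject it
})
export class NotificationsModule {}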

Gotcha 3: Circular Module Dependencies

You’ll eventually create a situation where OrdersModule imports UsersModule and UsersModule imports OrdersModule. NestJS detects this at startup and throws:

Error: A circular dependency has been detected (OrdersModule -> UsersModule -> OrdersModule).
Please, make sure that each side of a bidirectional relationships are using forwardRef().

The fix is forwardRef():

// orders/orders.module.ts
import { Module, forwardRef } from '@nestjs/common';
import { UsersModule } from '../users/users.module';

@Module({
  imports: [forwardRef(() => UsersModule)],  // Deferred reference
  providers: [OrdersService],
  exports: [OrdersService],
})
export class OrdersModule {}

// users/users.module.ts
@Module({
  imports: [forwardRef(() => OrdersModule)],
  providers: [UsersService],
  exports: [UsersService],
})
export class UsersModule {}

But before reaching for forwardRef(), reconsider the design. Circular dependencies are usually a sign that two modules are too tightly coupled. The better fix is to extract the shared logic into a third module (SharedModule or a more specific domain module) that both can import without creating a cycle. This is the same advice you’d give in ASP.NET Core — circular service dependencies are an architecture smell.
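
A sketch of that refactor, with illustrative module and service names:

// shared/customers.module.ts — the piece both features actually need
import { Module } from '@nestjs/common';
import { CustomersService } from './customers.service';

@Module({
  providers: [CustomersService],
  exports: [CustomersService],
})
export class CustomersModule {}

// orders/orders.module.ts and users/users.module.ts both import CustomersModule
// instead of importing each other, so no cycle exists:
@Module({
  imports: [CustomersModule],
  controllers: [OrdersController],
  providers: [OrdersService],
})
export class OrdersModule {}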

Gotcha 4: NestJS Decorators Are Not ASP.NET Attributes

They look the same. They are not the same thing at all.

ASP.NET attributes are metadata — the CLR reads them via reflection at runtime. The attribute class has no impact on how your code executes unless something explicitly reads it.

TypeScript decorators are functions that execute when the class is defined. @Controller('orders') runs as soon as the JavaScript module is loaded, calling a NestJS function that attaches routing metadata to the class so the framework can register it as a controller. The decorator has side effects. If you apply a decorator to the wrong thing, you get an error at class definition time, not at request time.

This matters in one practical way: a decorator and its arguments are fixed at class definition time and cannot be toggled later. If you need behavior that depends on runtime configuration (the way you might conditionally add a global filter in Program.cs), put that logic inside a guard, interceptor, or dynamic module rather than trying to switch decorators on and off.
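
A toy decorator (not part of NestJS) makes the timing visible; the names are illustrative:

// Runs when the class definition is evaluated, long before any request arrives
function LogDefinition(label: string): ClassDecorator {
  console.log(`decorator factory evaluated for ${label}`);
  return (target) => {
    console.log(`decorating ${target.name}`);
  };
}

@LogDefinition('ReportsController')
class ReportsController {}

// Importing this file prints both messages immediately; there is no request yet,
// so there is nothing to make the decorator conditional on.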

Gotcha 5: The Default HTTP Status Codes Differ Slightly

ASP.NET Core’s [ApiController] attribute changes some default behaviors. NestJS has its own defaults:

Operation | ASP.NET Core default | NestJS default
GET returning an object | 200 | 200
POST returning an object | 200 (unless you return Created()) | 201
PUT returning an object | 200 | 200
DELETE returning void | 200 (unless you return NoContent()) | 200

The surprise: NestJS @Post() returns 201 by default. To override, use @HttpCode(HttpStatus.OK). To make DELETE return 204 (the semantically correct response), use @HttpCode(HttpStatus.NO_CONTENT). ASP.NET engineers habitually write return Ok() and return NoContent(); in NestJS, you set the status with an @HttpCode() decorator on the handler.
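
In code, the overrides look like this (handler bodies elided; the reorder route is illustrative):

// POST that should return 200 instead of the NestJS default 201
@Post('reorder')
@HttpCode(HttpStatus.OK)
reorder(@Body() dto: CreateOrderDto) { /* ... */ }

// DELETE that returns 204 instead of the default 200
@Delete(':id')
@HttpCode(HttpStatus.NO_CONTENT)
remove(@Param('id', ParseIntPipe) id: number) { /* ... */ }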

Hands-On Exercise

Build a complete Products feature module from scratch without using the CLI’s scaffolding — write each file manually so you understand what the CLI generates.

Requirements:

  • POST /api/products — create a product with name: string, priceCents: number, sku: string
  • GET /api/products — list all products
  • GET /api/products/:id — get one product; return 404 if not found
  • PATCH /api/products/:id — partial update (not PUT — all fields optional)
  • DELETE /api/products/:id — delete; return 204

Then:

  1. Add a StatsModule with a StatsService that has a getProductCount() method
  2. Export StatsService from StatsModule
  3. Import StatsModule into ProductsModule
  4. Inject StatsService into ProductsController and add GET /api/products/stats that returns { count: number }

Verify the dependency graph is correct by checking that you can call the stats endpoint after creating a few products.

Expected file structure when complete:

src/
├── products/
│   ├── dto/
│   │   ├── create-product.dto.ts
│   │   └── update-product.dto.ts
│   ├── product.entity.ts
│   ├── products.controller.ts
│   ├── products.module.ts
│   └── products.service.ts
├── stats/
│   ├── stats.module.ts
│   └── stats.service.ts
├── app.module.ts
└── main.ts

Quick Reference

ASP.NET Core → NestJS Cheat Sheet

ASP.NET Core | NestJS | File Location
Program.cs (builder setup) | app.module.ts | src/app.module.ts
Program.cs (app startup) | main.ts | src/main.ts
builder.Services.AddScoped<T>() | providers: [T] in @Module() | The feature’s *.module.ts
builder.Services.AddSingleton<T>() | providers: [{ provide: T, useClass: T }] (default scope) | Same
builder.Services.AddTransient<T>() | @Injectable({ scope: Scope.TRANSIENT }) | On the class
[ApiController] + [Route("x")] | @Controller('x') | Controller class
[HttpGet("{id}")] | @Get(':id') | Controller method
[HttpPost] | @Post() | Controller method
[HttpPut("{id}")] | @Put(':id') | Controller method
[HttpDelete("{id}")] | @Delete(':id') | Controller method
[FromBody] | @Body() | Method parameter
[FromRoute] / route param | @Param('name') | Method parameter
[FromQuery] | @Query('name') | Method parameter
return NotFound() | throw new NotFoundException(msg) | In service or controller
return BadRequest() | throw new BadRequestException(msg) | In service or controller
return NoContent() | @HttpCode(204) on the method | Controller method decorator
[Authorize] | Guard (see Article 4.2) | —
IServiceProvider | ModuleRef (rarely needed directly) | Injected service

Generating Files with the CLI

nest generate module <name>      # Creates <name>/<name>.module.ts
nest generate controller <name>  # Creates <name>/<name>.controller.ts
nest generate service <name>     # Creates <name>/<name>.service.ts
nest generate resource <name>    # Creates all of the above + DTOs + CRUD boilerplate

Provider Scope Reference

// Singleton (default) — one instance for the app lifetime
@Injectable()
export class MyService {}

// Request-scoped — new instance per HTTP request
@Injectable({ scope: Scope.REQUEST })
export class RequestService {}

// Transient — new instance per injection point
@Injectable({ scope: Scope.TRANSIENT })
export class TransientService {}

Built-in HTTP Exceptions

// These map directly to HTTP status codes
throw new BadRequestException('message');          // 400
throw new UnauthorizedException('message');        // 401
throw new ForbiddenException('message');           // 403
throw new NotFoundException('message');            // 404
throw new ConflictException('message');            // 409
throw new UnprocessableEntityException('message'); // 422
throw new InternalServerErrorException('message'); // 500

Further Reading

NestJS Middleware, Guards, Interceptors & Pipes

For .NET engineers who know: ASP.NET Core middleware, action filters (IActionFilter, IAsyncActionFilter), authorization filters ([Authorize]), result filters, exception middleware, and model binding with Data Annotations
You’ll learn: How NestJS’s request pipeline components map to ASP.NET Core’s filter and middleware system — the names changed, the concepts did not
Time: 20-30 min read

The .NET Way (What You Already Know)

ASP.NET Core’s request pipeline is a layered architecture. Middleware runs first in the order you register it. Then, for MVC requests, the filter pipeline takes over: authorization filters, resource filters, model binding, action filters (before and after), result filters, and exception filters.

// Program.cs — Middleware pipeline (runs for every request)
app.UseExceptionHandler("/error");  // Outermost — catches unhandled exceptions
app.UseHttpsRedirection();
app.UseAuthentication();            // Sets HttpContext.User
app.UseAuthorization();             // Checks [Authorize] attributes
app.UseRateLimiter();
app.MapControllers();               // Routes to MVC pipeline
// Global action filter — registered in AddControllers()
builder.Services.AddControllers(options =>
{
    options.Filters.Add<LoggingFilter>();      // Runs around every action
    options.Filters.Add<ValidationFilter>();   // Validates model state
});

// [Authorize] — Authorization filter (runs before action filters)
[Authorize(Roles = "Admin")]
[HttpPost("orders")]
public async Task<IActionResult> CreateOrder([FromBody] CreateOrderDto dto) { /* ... */ }

The execution order in ASP.NET Core MVC:

graph TD
    A1["Request"]
    A2["Middleware pipeline (UseX)"]
    A3["Authorization filters ([Authorize])"]
    A4["Resource filters"]
    A5["Model binding ([FromBody], [FromQuery])"]
    A6["Action filters — OnActionExecuting"]
    A7["Your controller action"]
    A8["Action filters — OnActionExecuted"]
    A9["Result filters — OnResultExecuting"]
    A10["Response written"]
    A11["Result filters — OnResultExecuted"]
    A12["Exception filters (catch anything above)"]

    A1 --> A2 --> A3 --> A4 --> A5 --> A6 --> A7 --> A8 --> A9 --> A10 --> A11
    A2 --> A12

NestJS has the same structure. Every concept has an equivalent; it’s the naming that changed.

The NestJS Way

The Complete Pipeline

graph TD
    N1["Request"]
    N2["Middleware (global and module-level)"]
    N3["Guards (global → controller → handler)"]
    N4["Interceptors — before (global → controller → handler)"]
    N5["Pipes (global → controller → handler)"]
    N6["Controller handler method"]
    N7["Interceptors — after (handler → controller → global, reverse order)"]
    N8["Exception Filters (caught at any point above)"]
    N9["Response"]

    N1 --> N2 --> N3 --> N4 --> N5 --> N6 --> N7 --> N9
    N2 --> N8
    N8 --> N9

Side-by-side with ASP.NET Core:

ASP.NET Core | NestJS Equivalent | Runs At
app.Use*() middleware | Middleware (implements NestMiddleware) | Outermost, before the rest of the pipeline
Authorization filter ([Authorize]) | Guard (implements CanActivate) | After middleware, before interceptors
Action filter — before | Interceptor — before | After guards, before pipes
Model binding ([FromBody], [FromQuery]) | Pipe (implements PipeTransform) | After interceptors, immediately before the handler
Data Annotations / FluentValidation | Pipe + class-validator or Zod | Same position as model binding
Action filter — after | Interceptor — after | After handler, before response
Result filter | Interceptor — after (response transformation) | Same position
Exception middleware | Exception Filter (implements ExceptionFilter) | Catches exceptions from anywhere in the pipeline

Middleware

NestJS middleware is functionally identical to ASP.NET Core middleware. It receives the request and response objects and a next() function.

// logging.middleware.ts
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';

// Equivalent to writing app.Use(async (context, next) => { ... }) in Program.cs
// Or implementing IMiddleware in .NET
@Injectable()
export class LoggingMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction) {
    const { method, originalUrl } = req;
    const start = Date.now();

    // The equivalent of calling next.Invoke() in .NET middleware
    res.on('finish', () => {
      const ms = Date.now() - start;
      console.log(`${method} ${originalUrl} ${res.statusCode} — ${ms}ms`);
    });

    next(); // Call next middleware/handler
  }
}

Middleware is applied in the module, not globally in main.ts:

// app.module.ts
import { Module, NestModule, MiddlewareConsumer, RequestMethod } from '@nestjs/common';
import { LoggingMiddleware } from './logging.middleware';
import { OrdersModule } from './orders/orders.module';

@Module({
  imports: [OrdersModule],
})
export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    consumer
      .apply(LoggingMiddleware)
      // Apply to all routes — equivalent to app.Use(middleware) before MapControllers()
      .forRoutes('*');

    // Or be specific:
    // .forRoutes({ path: 'orders', method: RequestMethod.ALL })
    // .forRoutes(OrdersController)  // Apply only to OrdersController's routes
  }
}

For simple middleware, you can use a function instead of a class (equivalent to the inline app.Use(async (context, next) => { }) style in .NET):

// Simple function-based middleware
import { Request, Response, NextFunction } from 'express';

export function corsMiddleware(req: Request, res: Response, next: NextFunction) {
  res.header('Access-Control-Allow-Origin', '*');
  next();
}

// Apply it the same way
consumer.apply(corsMiddleware).forRoutes('*');

Guards — The [Authorize] Equivalent

A Guard answers one question: should this request be allowed to proceed? It returns true to allow or false to block. This is exactly what ASP.NET’s authorization filter does — check the principal, return 403 if unauthorized.

// auth.guard.ts
import {
  Injectable,
  CanActivate,
  ExecutionContext,
  UnauthorizedException,
} from '@nestjs/common';
import { Request } from 'express';

// Equivalent to implementing IAuthorizationFilter in ASP.NET Core
// or using [Authorize] with a custom AuthorizationRequirement
@Injectable()
export class AuthGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    const request = context.switchToHttp().getRequest<Request>();
    const token = this.extractToken(request);

    if (!token) {
      // Equivalent to context.Result = new UnauthorizedResult() in ASP.NET
      throw new UnauthorizedException('No token provided');
    }

    // Validate the token (in a real app: verify JWT signature, check expiry, etc.)
    // See Article 4.3 for the Clerk integration
    const payload = this.validateToken(token);
    if (!payload) {
      throw new UnauthorizedException('Invalid token');
    }

    // Attach user to request (equivalent to HttpContext.User in ASP.NET)
    request['user'] = payload;
    return true;
  }

  private extractToken(request: Request): string | null {
    const [type, token] = request.headers.authorization?.split(' ') ?? [];
    return type === 'Bearer' ? token : null;
  }

  private validateToken(token: string): Record<string, unknown> | null {
    // Real implementation in Article 4.3
    return token === 'valid-token' ? { userId: 1, roles: ['admin'] } : null;
  }
}

Apply guards at three levels, mirroring ASP.NET’s global/controller/action filter scopes:

// Global — equivalent to AddControllers(options => options.Filters.Add<AuthFilter>())
// In main.ts:
const app = await NestFactory.create(AppModule);
app.useGlobalGuards(new AuthGuard());

// Controller level — equivalent to [Authorize] on the controller class
@Controller('orders')
@UseGuards(AuthGuard)   // All methods in this controller require auth
export class OrdersController { /* ... */ }

// Method level — equivalent to [Authorize] on a specific action
@Get('admin-only')
@UseGuards(AuthGuard, RolesGuard)  // Multiple guards — all must pass
adminOnly() { /* ... */ }

// Allow anonymous on a specific method when the controller is globally guarded
// Equivalent to [AllowAnonymous] in ASP.NET
@Get('public')
@Public()  // Custom decorator that sets metadata (shown below)
publicEndpoint() { /* ... */ }

The @Public() decorator pattern is the NestJS equivalent of [AllowAnonymous]:

// public.decorator.ts — custom metadata decorator
import { SetMetadata } from '@nestjs/common';
export const IS_PUBLIC_KEY = 'isPublic';
export const Public = () => SetMetadata(IS_PUBLIC_KEY, true);

// Update AuthGuard to check for this metadata:
import { Reflector } from '@nestjs/core';

@Injectable()
export class AuthGuard implements CanActivate {
  constructor(private reflector: Reflector) {}

  canActivate(context: ExecutionContext): boolean {
    // Check if the route is marked @Public() — if so, skip auth
    const isPublic = this.reflector.getAllAndOverride<boolean>(IS_PUBLIC_KEY, [
      context.getHandler(),   // Check the method first
      context.getClass(),     // Then check the controller class
    ]);
    if (isPublic) return true;

    // ... rest of auth logic
    const request = context.switchToHttp().getRequest<Request>();
    const token = this.extractToken(request);
    if (!token) throw new UnauthorizedException();
    // ...
    return true;
  }
}

Role-based authorization — the [Authorize(Roles = "Admin")] equivalent:

// roles.decorator.ts
import { SetMetadata } from '@nestjs/common';
export const Roles = (...roles: string[]) => SetMetadata('roles', roles);

// roles.guard.ts
@Injectable()
export class RolesGuard implements CanActivate {
  constructor(private reflector: Reflector) {}

  canActivate(context: ExecutionContext): boolean {
    const requiredRoles = this.reflector.getAllAndOverride<string[]>('roles', [
      context.getHandler(),
      context.getClass(),
    ]);

    // No @Roles() decorator — allow through
    if (!requiredRoles || requiredRoles.length === 0) return true;

    const request = context.switchToHttp().getRequest<Request>();
    const user = request['user']; // Set by AuthGuard earlier in the pipeline

    if (!user) throw new UnauthorizedException();

    const hasRole = requiredRoles.some((role) =>
      (user.roles as string[]).includes(role),
    );
    if (!hasRole) throw new ForbiddenException('Insufficient permissions');

    return true;
  }
}

// Usage — equivalent to [Authorize(Roles = "Admin")]
@Get('reports')
@UseGuards(AuthGuard, RolesGuard)
@Roles('admin')
getAdminReports() { /* ... */ }

Interceptors — Action Filters

Interceptors wrap the request handler — they can execute code before the handler, observe the result, transform it, or intercept errors. This maps directly to IAsyncActionFilter’s OnActionExecutionAsync:

// C# — async action filter
public class LoggingFilter : IAsyncActionFilter
{
    public async Task OnActionExecutionAsync(
        ActionExecutingContext context,
        ActionExecutionDelegate next)
    {
        var sw = Stopwatch.StartNew();
        var result = await next(); // Calls the action
        sw.Stop();
        _logger.LogInformation("Completed in {Ms}ms", sw.ElapsedMilliseconds);
    }
}
// TypeScript — equivalent NestJS interceptor
import {
  Injectable,
  NestInterceptor,
  ExecutionContext,
  CallHandler,
} from '@nestjs/common';
import { Observable, tap } from 'rxjs';

@Injectable()
export class LoggingInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
    const { method, url } = context.switchToHttp().getRequest();
    const start = Date.now();

    return next.handle().pipe(
      // tap() runs AFTER the handler completes (like OnActionExecuted)
      tap(() => {
        const ms = Date.now() - start;
        console.log(`${method} ${url} — ${ms}ms`);
      }),
    );
  }
}

The Observable return type and RxJS pipe operators replace the async continuation pattern. next.handle() is equivalent to await next() in the C# filter — it returns an observable that emits the handler’s response. The RxJS operators let you transform, delay, or replace that response.
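
The same chain can also intercept errors. A minimal sketch, assuming a five-second timeout policy; the class name is illustrative, while RequestTimeoutException is a built-in NestJS exception:

// timeout.interceptor.ts — a sketch
import {
  Injectable,
  NestInterceptor,
  ExecutionContext,
  CallHandler,
  RequestTimeoutException,
} from '@nestjs/common';
import { Observable, TimeoutError, catchError, throwError, timeout } from 'rxjs';

@Injectable()
export class TimeoutInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
    return next.handle().pipe(
      timeout(5000), // Fail the request if the handler takes longer than 5 seconds
      catchError((err) =>
        err instanceof TimeoutError
          ? throwError(() => new RequestTimeoutException()) // Becomes a 408 response
          : throwError(() => err), // Everything else flows on to the exception filters
      ),
    );
  }
}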

A response transformation interceptor — the equivalent of a result filter that wraps all responses in an envelope:

// C# — result filter wrapping responses
public class ResponseEnvelopeFilter : IResultFilter
{
    public void OnResultExecuting(ResultExecutingContext context)
    {
        if (context.Result is ObjectResult obj)
        {
            context.Result = new ObjectResult(new { data = obj.Value, success = true });
        }
    }
    public void OnResultExecuted(ResultExecutedContext context) { }
}
// TypeScript — interceptor wrapping responses in an envelope
import { map } from 'rxjs';

export interface ResponseEnvelope<T> {
  data: T;
  success: boolean;
  timestamp: string;
}

@Injectable()
export class ResponseEnvelopeInterceptor<T>
  implements NestInterceptor<T, ResponseEnvelope<T>>
{
  intercept(
    context: ExecutionContext,
    next: CallHandler,
  ): Observable<ResponseEnvelope<T>> {
    return next.handle().pipe(
      map((data) => ({
        data,
        success: true,
        timestamp: new Date().toISOString(),
      })),
    );
  }
}

Caching interceptor — intercept before the handler to short-circuit if cached:

// cache.interceptor.ts
import { Injectable, NestInterceptor, ExecutionContext, CallHandler } from '@nestjs/common';
import { Observable, of, tap } from 'rxjs';
import { Request } from 'express';

@Injectable()
export class CacheInterceptor implements NestInterceptor {
  // CacheService is assumed to be provided elsewhere (e.g. a Redis-backed provider)
  constructor(private readonly cacheService: CacheService) {}

  async intercept(
    context: ExecutionContext,
    next: CallHandler,
  ): Promise<Observable<unknown>> {
    const request = context.switchToHttp().getRequest<Request>();
    const key = `${request.method}:${request.url}`;

    const cached = await this.cacheService.get(key);
    if (cached) {
      // Short-circuit — return cached value without calling the handler
      // Equivalent to: context.Result = new ObjectResult(cached); return; in C#
      return of(cached);
    }

    return next.handle().pipe(
      tap(async (data) => {
        await this.cacheService.set(key, data, 60); // Cache for 60 seconds
      }),
    );
  }
}

Apply interceptors at all three levels, same as guards:

// Global — in main.ts
app.useGlobalInterceptors(new LoggingInterceptor());

// Controller level
@Controller('orders')
@UseInterceptors(LoggingInterceptor)
export class OrdersController { /* ... */ }

// Method level
@Get(':id')
@UseInterceptors(CacheInterceptor)
findOne(@Param('id', ParseIntPipe) id: number) { /* ... */ }

Pipes — Model Binding and Validation

Pipes are the model binding + validation layer. In ASP.NET Core, model binding converts the HTTP request into typed parameters and Data Annotations validate the bound model. NestJS pipes do the same thing in one step.

Built-in pipes that map to ASP.NET’s automatic type conversion:

// ParseIntPipe — equivalent to routing a parameter declared as int in C#
// Throws 400 BadRequest if the parameter is not a valid integer
@Get(':id')
findOne(@Param('id', ParseIntPipe) id: number) { /* ... */ }

// ParseUUIDPipe — validates UUID format
@Get(':id')
findOne(@Param('id', ParseUUIDPipe) id: string) { /* ... */ }

// ParseBoolPipe — 'true'/'false' string to boolean
@Get()
findAll(@Query('active', ParseBoolPipe) active: boolean) { /* ... */ }

// DefaultValuePipe — like a default parameter value
@Get()
findAll(
  @Query('page', new DefaultValuePipe(1), ParseIntPipe) page: number,
  @Query('limit', new DefaultValuePipe(10), ParseIntPipe) limit: number,
) { /* ... */ }

For DTO validation, NestJS uses class-validator (attribute-based, like Data Annotations) or Zod. We recommend Zod for new projects because it co-locates the type definition and the validation schema, eliminating the redundancy of separate DTO classes and validation attributes.

Approach 1: class-validator (familiar to .NET engineers)

pnpm add class-validator class-transformer
// create-order.dto.ts — class-validator approach (like Data Annotations)
import { IsNumber, IsArray, IsOptional, IsString, Min, ValidateNested } from 'class-validator';
import { Type } from 'class-transformer';

export class OrderItemDto {
  @IsNumber()
  productId: number;

  @IsNumber()
  @Min(1)
  quantity: number;

  @IsNumber()
  @Min(0)
  unitPriceCents: number;
}

export class CreateOrderDto {
  @IsNumber()
  customerId: number;

  @IsArray()
  @ValidateNested({ each: true })
  @Type(() => OrderItemDto)
  items: OrderItemDto[];

  @IsString()
  @IsOptional()
  notes?: string;
}
// Enable the validation pipe globally in main.ts
// Equivalent to [ApiController] which auto-validates ModelState in ASP.NET
import { ValidationPipe } from '@nestjs/common';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(
    new ValidationPipe({
      whitelist: true,       // Strip properties not in the DTO (like [Bind(Include=...)])
      forbidNonWhitelisted: true, // Return 400 if extra properties are sent
      transform: true,       // Transform plain objects to DTO instances (needed for class-validator)
    }),
  );
  await app.listen(3000);
}

Approach 2: Zod (recommended — schema and type in one place)

pnpm add zod
// orders/dto/create-order.dto.ts — Zod approach
import { z } from 'zod';

export const OrderItemSchema = z.object({
  productId: z.number().int().positive(),
  quantity: z.number().int().min(1),
  unitPriceCents: z.number().int().min(0),
});

export const CreateOrderSchema = z.object({
  customerId: z.number().int().positive(),
  items: z.array(OrderItemSchema).min(1),
  notes: z.string().optional(),
});

// The TypeScript type is inferred from the schema — no separate interface needed
// Equivalent to: your DTO class IS also your validation definition
export type CreateOrderDto = z.infer<typeof CreateOrderSchema>;
// zod-validation.pipe.ts — custom pipe that validates using a Zod schema
import {
  PipeTransform,
  Injectable,
  ArgumentMetadata,
  BadRequestException,
} from '@nestjs/common';
import { ZodSchema, ZodError } from 'zod';

@Injectable()
export class ZodValidationPipe implements PipeTransform {
  constructor(private schema: ZodSchema) {}

  transform(value: unknown, _metadata: ArgumentMetadata) {
    const result = this.schema.safeParse(value);
    if (!result.success) {
      // Format Zod errors like ASP.NET's ModelState validation errors
      const errors = (result.error as ZodError).errors.map((e) => ({
        field: e.path.join('.'),
        message: e.message,
      }));
      throw new BadRequestException({ message: 'Validation failed', errors });
    }
    return result.data;
  }
}
// Use the pipe on a specific body parameter
@Post()
create(
  @Body(new ZodValidationPipe(CreateOrderSchema)) dto: CreateOrderDto,
) {
  return this.ordersService.create(dto);
}

Custom pipes for domain-specific transformations:

// trim-strings.pipe.ts — trim whitespace from all string properties
// Equivalent to a custom model binder in ASP.NET Core
@Injectable()
export class TrimStringsPipe implements PipeTransform {
  transform(value: unknown): unknown {
    if (typeof value === 'string') return value.trim();
    if (typeof value === 'object' && value !== null) {
      return Object.fromEntries(
        Object.entries(value).map(([k, v]) => [
          k,
          typeof v === 'string' ? v.trim() : v,
        ]),
      );
    }
    return value;
  }
}

Exception Filters

Exception filters catch unhandled exceptions from anywhere in the pipeline and convert them to HTTP responses. In ASP.NET Core, this is a combination of exception middleware (UseExceptionHandler) and exception filters (IExceptionFilter).

NestJS’s built-in global exception filter already handles HttpException subclasses (all the NotFoundException, BadRequestException, etc.) and converts unrecognized exceptions to 500 Internal Server Error. You typically add custom exception filters for:

  1. Transforming ORM-specific exceptions (Prisma errors, DB constraint violations) into meaningful HTTP responses
  2. Adding structured error logging before the response is sent
  3. Customizing the error response format
// http-exception.filter.ts — custom exception filter
// Equivalent to implementing IExceptionFilter in ASP.NET or UseExceptionHandler middleware
import {
  ExceptionFilter,
  Catch,
  ArgumentsHost,
  HttpException,
  HttpStatus,
  Logger,
} from '@nestjs/common';
import { Request, Response } from 'express';

// @Catch(HttpException) — only catches HttpException and subclasses
// @Catch() with no argument catches ALL exceptions
@Catch(HttpException)
export class HttpExceptionFilter implements ExceptionFilter {
  private readonly logger = new Logger(HttpExceptionFilter.name);

  catch(exception: HttpException, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();
    const request = ctx.getRequest<Request>();
    const status = exception.getStatus();
    const exceptionResponse = exception.getResponse();

    // Structure the error response (equivalent to ProblemDetails in ASP.NET)
    const errorBody = {
      statusCode: status,
      timestamp: new Date().toISOString(),
      path: request.url,
      message:
        typeof exceptionResponse === 'string'
          ? exceptionResponse
          : (exceptionResponse as Record<string, unknown>).message,
    };

    if (status >= 500) {
      this.logger.error(`${request.method} ${request.url}`, exception.stack);
    }

    response.status(status).json(errorBody);
  }
}

For Prisma-specific errors (analogous to catching DbUpdateConcurrencyException or SqlException in .NET):

// prisma-exception.filter.ts
import { Catch, ArgumentsHost, ExceptionFilter, HttpStatus } from '@nestjs/common';
import { Prisma } from '@prisma/client';

@Catch(Prisma.PrismaClientKnownRequestError)
export class PrismaExceptionFilter implements ExceptionFilter {
  catch(exception: Prisma.PrismaClientKnownRequestError, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse();

    // Prisma error codes: https://www.prisma.io/docs/reference/api-reference/error-reference
    switch (exception.code) {
      case 'P2002':
        // Unique constraint violation — equivalent to catching SqlException 2627 in .NET
        return response.status(HttpStatus.CONFLICT).json({
          statusCode: 409,
          message: 'A record with this value already exists',
          field: exception.meta?.target,
        });
      case 'P2025':
        // Record not found — equivalent to catching NotFoundException
        return response.status(HttpStatus.NOT_FOUND).json({
          statusCode: 404,
          message: 'Record not found',
        });
      default:
        return response.status(HttpStatus.INTERNAL_SERVER_ERROR).json({
          statusCode: 500,
          message: 'Database error',
        });
    }
  }
}

Register exception filters globally or at any scope:

// Global — in main.ts
app.useGlobalFilters(new HttpExceptionFilter(), new PrismaExceptionFilter());

// Controller level
@Controller('orders')
@UseFilters(HttpExceptionFilter)
export class OrdersController { /* ... */ }

// Method level
@Post()
@UseFilters(PrismaExceptionFilter)
create(@Body() dto: CreateOrderDto) { /* ... */ }

Execution Order in Practice

Building on the diagram from the introduction, here is the same request traced through both frameworks side-by-side.

Request: POST /api/orders (authenticated user, valid body)

ASP.NET Core execution trace:

graph TD
    AE1["1. UseExceptionHandler\n(outermost middleware — wraps everything)"]
    AE2["2. UseAuthentication\n(sets HttpContext.User from JWT)"]
    AE3["3. UseAuthorization\n(checks [Authorize] attribute)"]
    AE4["4. LoggingFilter.OnActionExecuting"]
    AE5["5. ValidationFilter.OnActionExecuting\n(checks ModelState)"]
    AE6["6. [FromBody] model binding"]
    AE7["7. OrdersController.Create() executes"]
    AE8["8. LoggingFilter.OnActionExecuted"]
    AE9["9. ResponseEnvelopeFilter.OnResultExecuting"]
    AE10["10. Response written"]
    AE11["11. ResponseEnvelopeFilter.OnResultExecuted"]
    AE1 --> AE2 --> AE3 --> AE4 --> AE5 --> AE6 --> AE7 --> AE8 --> AE9 --> AE10 --> AE11

NestJS execution trace:

graph TD
    NE1["1. LoggingMiddleware.use()\n(middleware — outermost)"]
    NE2["2. AuthGuard.canActivate() (guards)"]
    NE3["3. RolesGuard.canActivate() (guards, if present)"]
    NE4["4. LoggingInterceptor.intercept() — before next.handle()"]
    NE5["5. ResponseEnvelopeInterceptor.intercept() — before next.handle()"]
    NE6["6. ZodValidationPipe.transform() (pipes)"]
    NE7["7. ParseIntPipe.transform() (other pipes)"]
    NE8["8. OrdersController.create() executes"]
    NE9["9. ResponseEnvelopeInterceptor.intercept() — after next.handle() (reverse order)"]
    NE10["10. LoggingInterceptor.intercept() — after next.handle() (reverse order)"]
    NE11["11. Response written"]
    NE1 --> NE2 --> NE3 --> NE4 --> NE5 --> NE6 --> NE7 --> NE8 --> NE9 --> NE10 --> NE11

Note that interceptors wrap in reverse order on the way back — the last interceptor applied is the first to process the response. This is identical to how the middleware pipeline unwinds in .NET.
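To make the unwinding concrete, here is a minimal sketch with two trivial interceptors — the names and log lines are illustrative only:

// With app.useGlobalInterceptors(new FirstInterceptor(), new SecondInterceptor()),
// a single request logs:
//   First: before handler
//   Second: before handler
//   (route handler runs)
//   Second: after handler
//   First: after handler
import { CallHandler, ExecutionContext, Injectable, NestInterceptor } from '@nestjs/common';
import { Observable, tap } from 'rxjs';

@Injectable()
export class FirstInterceptor implements NestInterceptor {
  intercept(_context: ExecutionContext, next: CallHandler): Observable<unknown> {
    console.log('First: before handler');
    return next.handle().pipe(tap(() => console.log('First: after handler')));
  }
}

@Injectable()
export class SecondInterceptor implements NestInterceptor {
  intercept(_context: ExecutionContext, next: CallHandler): Observable<unknown> {
    console.log('Second: before handler');
    return next.handle().pipe(tap(() => console.log('Second: after handler')));
  }
}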

Rate Limiting — Practical Middleware Example

Rate limiting in ASP.NET Core (added natively in .NET 7) is middleware. In NestJS, it’s typically a guard using a library:

pnpm add @nestjs/throttler
// app.module.ts — Configure the throttler module
import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';
import { APP_GUARD } from '@nestjs/core';

@Module({
  imports: [
    ThrottlerModule.forRoot([{
      name: 'default',
      ttl: 60000,   // 60 seconds (milliseconds)
      limit: 100,   // 100 requests per window
    }]),
  ],
  providers: [
    {
      // Register ThrottlerGuard globally — applies to all endpoints
      // Equivalent to: app.UseRateLimiter() + global rate limit policy
      provide: APP_GUARD,
      useClass: ThrottlerGuard,
    },
  ],
})
export class AppModule {}

// Override the limit on specific endpoints
// (Throttle and SkipThrottle are imported from '@nestjs/throttler')
@Post('register')
@Throttle({ default: { limit: 5, ttl: 60000 } }) // Only 5 per minute
register(@Body() dto: RegisterDto) { /* ... */ }

// Skip rate limiting on specific endpoints
@Get('health')
@SkipThrottle()
healthCheck() { return { status: 'ok' }; }

Key Differences

| Concept | ASP.NET Core | NestJS | Important Nuance |
|---|---|---|---|
| Cross-cutting concern model | Middleware + Filter pipeline | Middleware + Guards + Interceptors + Pipes + Exception Filters | More granular naming in NestJS |
| Authorization check | Authorization filter ([Authorize]) | Guard (implements CanActivate) | Guards can also be used for non-auth logic |
| Before/after action logic | Action filter (IActionFilter) | Interceptor (implements NestInterceptor) | Interceptors use RxJS Observable chain |
| Model binding | Framework-automatic, attribute-driven | Pipe-driven, explicit | Must add ValidationPipe; not automatic |
| Validation | Data Annotations (auto) + FluentValidation | class-validator via ValidationPipe or Zod pipe | Must opt in explicitly |
| Error to HTTP response | return NotFound() or exception filter | throw new NotFoundException() or exception filter | Services throw; filters catch |
| Global filter registration | options.Filters.Add<T>() in AddControllers() | app.useGlobalGuards/Interceptors/Pipes/Filters() | Separate registration per pipeline component type |
| Decorator metadata reading | Reflection over attributes | Reflector.getAllAndOverride() | Explicit metadata reading required for custom decorators |
| Filter/guard ordering | Configuration order | Outer-to-inner on entry, inner-to-outer on exit | Consistent with middleware pipeline |

The most significant difference: in ASP.NET Core, model validation happens automatically for all [ApiController] controllers. In NestJS, you must explicitly add ValidationPipe globally or per-endpoint. Forgetting this is the most common bug engineers coming from .NET hit in NestJS — your DTOs will silently accept invalid data.

Gotchas for .NET Engineers

Gotcha 1: Validation Does Not Happen Automatically

In ASP.NET Core, [ApiController] automatically validates the model and returns 400 if validation fails. In NestJS, there is no equivalent automatic validation. If you send invalid data to an endpoint without a ValidationPipe, NestJS will happily pass the invalid data to your controller.

// WRONG — The DTO has class-validator decorators, but without ValidationPipe,
// they are completely ignored. Invalid data reaches your service unchanged.
@Post()
create(@Body() dto: CreateOrderDto) {
  return this.ordersService.create(dto);  // dto.customerId might be undefined
}
// CORRECT — Add ValidationPipe globally in main.ts (do this once, up front)
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(
    new ValidationPipe({
      whitelist: true,           // Strip unknown properties
      forbidNonWhitelisted: true, // 400 if unknown properties are sent
      transform: true,           // Transform plain objects to class instances
    }),
  );
  await app.listen(3000);
}

Do this in main.ts on the first day of every project. Treat it as equivalent to the automatic validation that [ApiController] provides in .NET. Without it, your DTOs are decoration only.

Gotcha 2: Guards Cannot Access the Response Object to Short-Circuit with Custom Data

In ASP.NET Core authorization filters, you can write to the response directly:

context.Result = new JsonResult(new { error = "Custom auth error" })
{
    StatusCode = 403
};
return; // Short-circuit

In NestJS guards, you can only return true or false (or throw an exception). You cannot write a custom response body from a guard — you can only throw a ForbiddenException and let the exception filter handle the formatting.

// WRONG — Guards cannot set a custom response body
canActivate(context: ExecutionContext): boolean {
  // You cannot do this:
  // context.switchToHttp().getResponse().status(403).json({ custom: 'error' });
  return false; // This results in a generic 403 with the default error format
}

// CORRECT — Throw an exception; the exception filter formats the response
canActivate(context: ExecutionContext): boolean {
  if (!this.isAuthorized(context)) {
    throw new ForbiddenException('You do not have access to this resource');
    // Or: throw new HttpException({ custom: 'structure' }, 403);
  }
  return true;
}

Gotcha 3: Interceptors Use RxJS — Most Teams Use a Small Subset

NestJS interceptors use RxJS Observable. If you have not worked with RxJS before, this is a learning curve. The good news: in practice, four RxJS operators cover the vast majority of interceptor use cases.

import { map, tap, catchError, of, throwError } from 'rxjs';

// The four operators you'll use 95% of the time:

// map() — transform the response (like Select() in LINQ)
return next.handle().pipe(
  map((data) => ({ data, success: true })),
);

// tap() — side effect without transforming (like a callback)
return next.handle().pipe(
  tap((data) => this.cache.set(key, data)),
);

// catchError() — intercept errors (like try/catch in the response chain)
return next.handle().pipe(
  catchError((err) => {
    this.logger.error(err);
    return throwError(() => err); // Re-throw after logging
  }),
);

// of() — return an immediate value (for short-circuiting)
const cached = await this.cache.get(key);
if (cached) return of(cached);  // Return cached value, skip the handler

You do not need to understand cold vs. hot observables, subjects, multicasting, or any of RxJS’s advanced operators to write effective NestJS interceptors. Learn map, tap, catchError, and of. That is enough for years of production NestJS work.

Gotcha 4: The ExecutionContext Has to Be Switched to the Right Protocol

NestJS is designed to work across multiple protocols: HTTP, WebSockets, and gRPC. The ExecutionContext passed to guards, interceptors, and exception filters is protocol-agnostic. You must explicitly switch it to get the HTTP request/response objects:

// WRONG — switchToHttp() is required; you can't access .getRequest() directly
canActivate(context: ExecutionContext): boolean {
  const request = context.getRequest(); // This doesn't exist
  // ...
}

// CORRECT
canActivate(context: ExecutionContext): boolean {
  const request = context.switchToHttp().getRequest<Request>();
  const response = context.switchToHttp().getResponse<Response>();
  // ...
}

This feels verbose for HTTP-only applications (which is most NestJS apps). It’s designed for flexibility. Accept it and move on.

Gotcha 5: Guard Execution Order Matters, and Injected Guards Behave Differently from useGlobalGuards()

When you register a guard globally via app.useGlobalGuards(new AuthGuard()), you pass an instantiated object — the guard cannot receive injected dependencies because it’s created outside the NestJS DI context.

// WRONG — AuthGuard needs JwtService injected, but this bypasses DI
app.useGlobalGuards(new AuthGuard()); // JwtService won't be injected

// CORRECT — Register globally through the DI system using APP_GUARD
import { APP_GUARD } from '@nestjs/core';

@Module({
  providers: [
    {
      provide: APP_GUARD,
      useClass: AuthGuard, // NestJS creates this through DI — all dependencies are resolved
    },
  ],
})
export class AppModule {}

The same applies to useGlobalInterceptors, useGlobalPipes, and useGlobalFilters. If your global component needs dependencies (loggers, config, database access), register it via the APP_GUARD / APP_INTERCEPTOR / APP_PIPE / APP_FILTER provider tokens in the root module instead of passing an instance to useGlobal*().
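
As a concrete illustration, here is a minimal sketch of a global interceptor that needs DI, registered through APP_INTERCEPTOR — the AuditInterceptor name and the AUDIT_ENABLED setting are made up for the example:

// audit.interceptor.ts — needs ConfigService, so it must be constructed by the DI container
import { CallHandler, ExecutionContext, Injectable, Logger, Module, NestInterceptor } from '@nestjs/common';
import { APP_INTERCEPTOR } from '@nestjs/core';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { Observable, tap } from 'rxjs';

@Injectable()
export class AuditInterceptor implements NestInterceptor {
  private readonly logger = new Logger(AuditInterceptor.name);

  constructor(private readonly config: ConfigService) {}

  intercept(context: ExecutionContext, next: CallHandler): Observable<unknown> {
    const enabled = this.config.get<string>('AUDIT_ENABLED') === 'true';
    const request = context.switchToHttp().getRequest();

    return next.handle().pipe(
      tap(() => {
        if (enabled) this.logger.log(`${request.method} ${request.url}`);
      }),
    );
  }
}

// app.module.ts — because it is registered through the module, ConfigService is injected correctly
@Module({
  imports: [ConfigModule.forRoot()],
  providers: [{ provide: APP_INTERCEPTOR, useClass: AuditInterceptor }],
})
export class AppModule {}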

Gotcha 6: Exception Filter Ordering — Register the Catch-All First

When you register multiple global exception filters, the last-registered filter takes precedence whenever more than one filter could handle the same exception. Filters bound to disjoint exception types can be registered in any order, but a catch-all filter (@Catch() with no arguments) must be registered first, so the more specific filters registered after it still get the chance to handle their types.

// Fine — these two filters catch disjoint exception types, so their relative order doesn't matter:
// HttpExceptionFilter catches HttpException; PrismaExceptionFilter catches
// PrismaClientKnownRequestError, which extends Error, not HttpException.
app.useGlobalFilters(
  new HttpExceptionFilter(),
  new PrismaExceptionFilter(),
);

// WRONG — the catch-all is registered last, so it takes precedence and the specific filters never run
app.useGlobalFilters(
  new HttpExceptionFilter(),
  new PrismaExceptionFilter(),
  new CatchAllFilter(),          // @Catch() — catches EVERYTHING
);

// CORRECT — catch-all first, specific filters after it
app.useGlobalFilters(
  new CatchAllFilter(),          // Handles whatever the filters below don't
  new HttpExceptionFilter(),
  new PrismaExceptionFilter(),
);

The priority order is counterintuitive if you expect "first registered wins." Check the NestJS docs for the current behavior when mixing global filters with controller- and method-level filters, as binding order interacts with scope (method-level filters take precedence over controller-level, which take precedence over global).
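
The CatchAllFilter referenced above isn't defined elsewhere in this chapter; a minimal sketch looks like this (the response shape is illustrative):

// catch-all.filter.ts — @Catch() with no arguments matches every unhandled exception
import { ArgumentsHost, Catch, ExceptionFilter, HttpException, HttpStatus } from '@nestjs/common';
import { Response } from 'express';

@Catch()
export class CatchAllFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse<Response>();

    // Reuse the status of an HttpException; anything else defaults to 500
    const status =
      exception instanceof HttpException
        ? exception.getStatus()
        : HttpStatus.INTERNAL_SERVER_ERROR;

    response.status(status).json({
      statusCode: status,
      message: 'Unexpected error',
      timestamp: new Date().toISOString(),
    });
  }
}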

Hands-On Exercise

You’re building a protected API for a multi-tenant SaaS product. Implement the following pipeline components from scratch:

1. Request ID Middleware Create middleware that attaches a unique request ID to every request (use crypto.randomUUID()) and adds it to the response headers as X-Request-ID. If the request already has an X-Request-ID header, use that value instead (for tracing across services).

2. JWT Auth Guard Create a guard that:

  • Reads the Authorization: Bearer <token> header
  • Validates the token (for this exercise, use a hardcoded SECRET_TOKEN = 'dev-secret')
  • Attaches { userId: 1, roles: ['user'] } to request.user if valid
  • Throws UnauthorizedException if the token is missing or invalid
  • Skips validation for routes decorated with @Public()

3. Timing Interceptor Create an interceptor that:

  • Records the start time before the handler runs
  • Adds an X-Response-Time: <ms>ms header to the response
  • Logs the route and timing at the DEBUG level

4. Zod Validation Pipe Implement the ZodValidationPipe from the article and wire it to a POST /items endpoint with this schema:

const CreateItemSchema = z.object({
  name: z.string().min(1).max(100),
  priceCents: z.number().int().min(0),
  sku: z.string().regex(/^[A-Z0-9-]{3,20}$/),
});

Verify that:

  • POST /items with Authorization: Bearer dev-secret and a valid body returns 201
  • POST /items without auth returns 401
  • POST /items with auth but invalid body (e.g., negative price) returns 400 with field-level errors
  • GET /health (decorated with @Public()) returns 200 without auth

Quick Reference

Pipeline Component Comparison

| Component | Interface | Decorator | Registered Via | ASP.NET Equivalent |
|---|---|---|---|---|
| Middleware | NestMiddleware | — | MiddlewareConsumer.apply() | app.Use*() |
| Guard | CanActivate | @UseGuards() | APP_GUARD or useGlobalGuards() | [Authorize] / IAuthorizationFilter |
| Interceptor | NestInterceptor | @UseInterceptors() | APP_INTERCEPTOR or useGlobalInterceptors() | IAsyncActionFilter |
| Pipe | PipeTransform | @UsePipes() | APP_PIPE or useGlobalPipes() | Model binding + validation |
| Exception Filter | ExceptionFilter | @UseFilters() | APP_FILTER or useGlobalFilters() | IExceptionFilter / exception middleware |

Scope Registration Patterns

// Global (via main.ts — no DI injection available)
app.useGlobalGuards(new AuthGuard());
app.useGlobalInterceptors(new LoggingInterceptor());
app.useGlobalPipes(new ValidationPipe({ whitelist: true, transform: true }));
app.useGlobalFilters(new HttpExceptionFilter());

// Global (via module — DI injection works here — PREFERRED for components with dependencies)
@Module({
  providers: [
    { provide: APP_GUARD, useClass: AuthGuard },
    { provide: APP_INTERCEPTOR, useClass: LoggingInterceptor },
    { provide: APP_PIPE, useClass: ValidationPipe },
    { provide: APP_FILTER, useClass: HttpExceptionFilter },
  ],
})
export class AppModule {}

// Controller level
@Controller('orders')
@UseGuards(AuthGuard)
@UseInterceptors(LoggingInterceptor)
export class OrdersController { /* ... */ }

// Method level
@Post()
@UseGuards(RolesGuard)
@UseInterceptors(CacheInterceptor)
@UsePipes(new ZodValidationPipe(CreateOrderSchema))
create(@Body() dto: CreateOrderDto) { /* ... */ }

Built-in Pipes Reference

ParseIntPipe        // string → number (integer)
ParseFloatPipe      // string → number (float)
ParseBoolPipe       // 'true'/'false' → boolean
ParseArrayPipe      // comma-separated string → array
ParseUUIDPipe       // validates UUID format
ParseEnumPipe       // validates enum membership
DefaultValuePipe    // provides default if value is undefined
ValidationPipe      // runs class-validator decorators
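
A few usage examples for these pipes — the controller and routes here are made up for illustration:

import { Controller, DefaultValuePipe, Get, Param, ParseIntPipe, Query } from '@nestjs/common';

@Controller('examples')
export class ExamplesController {
  // ParseIntPipe: GET /examples/42 → id === 42 (a number); GET /examples/abc → 400 Bad Request
  @Get(':id')
  findOne(@Param('id', ParseIntPipe) id: number) {
    return { id };
  }

  // DefaultValuePipe + ParseIntPipe: missing ?page → page === 1; ?page=abc → 400
  @Get()
  findAll(@Query('page', new DefaultValuePipe(1), ParseIntPipe) page: number) {
    return { page };
  }
}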

Built-in HTTP Exceptions Reference

BadRequestException          // 400
UnauthorizedException        // 401
ForbiddenException           // 403
NotFoundException             // 404
MethodNotAllowedException     // 405
NotAcceptableException        // 406
RequestTimeoutException       // 408
ConflictException             // 409
GoneException                 // 410
PayloadTooLargeException      // 413
UnsupportedMediaTypeException // 415
UnprocessableEntityException  // 422
InternalServerErrorException  // 500
NotImplementedException       // 501
BadGatewayException           // 502
ServiceUnavailableException   // 503
GatewayTimeoutException       // 504
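
All of these accept either a message string or an object that becomes the response body, so you can attach structured fields when you need them:

import { ConflictException, NotFoundException } from '@nestjs/common';

// String payload → { statusCode: 409, message: 'Email already registered', error: 'Conflict' }
throw new ConflictException('Email already registered');

// Object payload → the object itself becomes the response body
throw new NotFoundException({
  statusCode: 404,
  message: 'Order not found',
  resource: 'order',
});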

RxJS Operators Used in Interceptors

map((data) => transform(data))           // Transform the response value
tap((data) => sideEffect(data))          // Side effect; doesn't change the value
catchError((err) => handleError(err))    // Intercept errors; must return Observable
throwError(() => new Error('...'))       // Re-throw or throw new error
of(value)                                // Emit an immediate value (for cache hits)

Further Reading

Authentication & Authorization with Clerk

For .NET engineers who know: ASP.NET Identity, cookie/JWT middleware (AddAuthentication, AddJwtBearer), [Authorize] attributes, claims-based identity, and UserManager<T>
You’ll learn: What Clerk is, why it replaces the auth stack you’d build yourself, and how to wire it across a Next.js frontend and a NestJS API with full role-based authorization
Time: 15-20 min read

The .NET Way (What You Already Know)

In the .NET world, authentication is a first-class framework concern. ASP.NET Identity is a full membership system: user store (usually SQL Server), password hashing, email confirmation, password reset, lockout, two-factor authentication, and external OAuth providers. You register it, run migrations, and get a complete user management system.

// Program.cs — ASP.NET Identity + JWT setup
builder.Services.AddDbContext<ApplicationDbContext>();
builder.Services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = config["Jwt:Issuer"],
            ValidateAudience = true,
            ValidAudience = config["Jwt:Audience"],
            ValidateLifetime = true,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(config["Jwt:Key"]!)),
        };
    });

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AdminOnly", policy => policy.RequireRole("Admin"));
});
// ProfileController.cs
[Authorize]
[ApiController]
[Route("api/[controller]")]
public class ProfileController : ControllerBase
{
    [HttpGet]
    public IActionResult GetProfile()
    {
        var userId = User.FindFirst(ClaimTypes.NameIdentifier)?.Value;
        return Ok(new { userId });
    }

    [Authorize(Policy = "AdminOnly")]
    [HttpDelete("{id}")]
    public IActionResult DeleteUser(string id) { /* ... */ }
}

This works, and .NET engineers know it well. But it is also the minimum viable auth stack you have to build and own yourself: ASP.NET Identity + JWT + email confirmation + password reset + social login integration + 2FA + token rotation. Each feature adds code, and the infrastructure has to be secured, monitored, and maintained. Clerk replaces all of it with a service.

The Clerk Way

What Clerk Provides

Clerk is a hosted authentication and user management service. It handles:

  • Sign-up and sign-in flows (email/password, magic links, social OAuth — Google, GitHub, etc.)
  • Multifactor authentication (TOTP, SMS)
  • Session management and token rotation
  • User profile management (email change, password change, connected accounts)
  • Organizations and roles (multi-tenant applications)
  • User metadata storage (arbitrary JSON attached to users)
  • Webhook delivery for user lifecycle events
  • Prebuilt UI components (sign-in/sign-up forms) or headless APIs

What you own: none of the above. You configure it in a dashboard, embed a few components, validate Clerk’s JWTs in your API, and you have a production-grade auth system in an afternoon.

The analogy in .NET terms: Clerk is roughly Azure AD B2C plus ASP.NET Identity’s user management UI, but simpler to configure, with better developer experience, and priced for startups.

What Clerk is not:

  • It is not an authorization system for your domain resources (whether User A can view Order B is still your code)
  • It is not free at scale (the free tier is generous; check pricing for your user volume)
  • It is not self-hosted (your user data lives in Clerk’s infrastructure — evaluate this for compliance requirements)

The Auth Flow

Understanding the full flow before writing code:

1. User visits your Next.js app (unauthenticated)
2. Next.js renders Clerk's <SignIn /> component (or redirects to Clerk's hosted sign-in page)
3. User signs in — Clerk authenticates, creates a session, issues a JWT
4. Clerk's session cookie + JWT are stored in the browser
5. Next.js server components read the session via Clerk's auth() helper
6. When the frontend calls your NestJS API, it sends the Clerk JWT in the Authorization header
7. NestJS verifies the JWT against Clerk's public keys
8. NestJS reads the userId and metadata from the JWT claims
9. NestJS authorizes the request based on roles/metadata

This is the same flow as ASP.NET Identity + JWT — the difference is that steps 2-4 are handled by Clerk rather than your code.

Setting Up Clerk

Step 1: Create a Clerk application

  1. Go to clerk.com and sign in
  2. Create a new application — choose the sign-in methods you want (email, Google, GitHub, etc.)
  3. Copy the API keys from the dashboard
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_...
CLERK_SECRET_KEY=sk_test_...

The publishable key is safe for the browser (hence the NEXT_PUBLIC_ prefix — see Article 1.9 on Next.js environment variables). The secret key stays on the server.

Step 2: Install Clerk in your Next.js app

pnpm add @clerk/nextjs

Step 3: Wrap your app with ClerkProvider

// app/layout.tsx — root layout
import { ClerkProvider } from '@clerk/nextjs';
import type { ReactNode } from 'react';

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <ClerkProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </ClerkProvider>
  );
}

This is analogous to enabling authentication middleware in ASP.NET Core — it makes the session available throughout the application.

Step 4: Protect routes with middleware

// middleware.ts — at the root of your Next.js project (or in src/ if you use a src directory)
import { clerkMiddleware, createRouteMatcher } from '@clerk/nextjs/server';

// Webhook endpoints stay public — list them separately so Clerk can reach them without a JWT
const isPublicApiRoute = createRouteMatcher(['/api/webhooks(.*)']);

// Define which routes require authentication
const isProtectedRoute = createRouteMatcher([
  '/dashboard(.*)',   // Matches /dashboard and all sub-paths
  '/settings(.*)',
  '/api(.*)',         // All API routes
]);

// This middleware runs on every request — equivalent to app.UseAuthentication() + app.UseAuthorization()
export default clerkMiddleware((auth, req) => {
  if (isProtectedRoute(req) && !isPublicApiRoute(req)) {
    auth.protect(); // Redirects to sign-in if unauthenticated
  }
});

export const config = {
  matcher: [
    // Skip Next.js internals and static files
    '/((?!_next|[^?]*\\.(?:html?|css|js(?!on)|jpe?g|webp|png|gif|svg|ttf|woff2?|ico|csv|docx?|xlsx?|zip|webmanifest)).*)',
    '/(api|trpc)(.*)',
  ],
};

The createRouteMatcher pattern replaces the [Authorize] attribute and route-level auth configuration you’d do in ASP.NET. Routes not in the protected list are public by default.

Clerk in Next.js: Reading Auth State

// app/dashboard/page.tsx — Server Component (runs on the server)
import { auth, currentUser } from '@clerk/nextjs/server';
import { redirect } from 'next/navigation';

export default async function DashboardPage() {
  // auth() is the server-side equivalent of HttpContext.User in ASP.NET
  const { userId, sessionClaims } = await auth();

  if (!userId) {
    redirect('/sign-in');
  }

  // currentUser() fetches the full user object from Clerk's API
  // Equivalent to: await _userManager.FindByIdAsync(userId)
  const user = await currentUser();

  return (
    <div>
      <h1>Welcome, {user?.firstName}</h1>
      <p>User ID: {userId}</p>
    </div>
  );
}
// app/components/profile-button.tsx — Client Component
'use client';
import { useUser, useAuth, SignOutButton } from '@clerk/nextjs';

export function ProfileButton() {
  // useUser() and useAuth() are React hooks for client components
  // Equivalent to injecting IHttpContextAccessor and reading User in Blazor
  const { user, isLoaded } = useUser();
  const { isSignedIn } = useAuth();

  if (!isLoaded) return <div>Loading...</div>;
  if (!isSignedIn) return null;

  return (
    <div>
      <span>{user.emailAddresses[0].emailAddress}</span>
      {/* SignOutButton handles session termination */}
      <SignOutButton>
        <button>Sign Out</button>
      </SignOutButton>
    </div>
  );
}
// app/sign-in/[[...sign-in]]/page.tsx — Sign-in page
import { SignIn } from '@clerk/nextjs';

export default function SignInPage() {
  // <SignIn /> renders Clerk's prebuilt sign-in form
  // Equivalent to an Identity-scaffolded login page
  // Handles password reset, OAuth redirects, 2FA — all built in
  return (
    <div className="flex justify-center py-12">
      <SignIn />
    </div>
  );
}

The [[...sign-in]] folder name is Next.js catch-all route syntax, needed because Clerk’s sign-in flow uses multiple URL segments for OAuth callbacks.
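
The sign-up page follows the same pattern — a minimal sketch mirroring the sign-in page above:

// app/sign-up/[[...sign-up]]/page.tsx — Sign-up page
import { SignUp } from '@clerk/nextjs';

export default function SignUpPage() {
  // <SignUp /> renders Clerk's prebuilt registration form — email verification and OAuth included
  return (
    <div className="flex justify-center py-12">
      <SignUp />
    </div>
  );
}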

Calling Your NestJS API with Auth

When a Server Component or Client Component needs to call your NestJS API, it must forward the Clerk JWT:

// lib/api-client.ts — API client for Server Components
import { auth } from '@clerk/nextjs/server';

export async function fetchFromApi<T>(path: string, options?: RequestInit): Promise<T> {
  // getToken() retrieves the JWT for the current session
  // This is the equivalent of reading the bearer token from HttpContext.Request.Headers
  const { getToken } = await auth();
  const token = await getToken();

  const response = await fetch(`${process.env.API_URL}${path}`, {
    ...options,
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,  // Send JWT to your NestJS API
      ...options?.headers,
    },
  });

  if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
  }

  return response.json();
}

// Usage in a Server Component
const orders = await fetchFromApi<Order[]>('/api/orders');
// For Client Components using TanStack Query, include the token in the query
'use client';
import { useAuth } from '@clerk/nextjs';
import { useQuery } from '@tanstack/react-query';

function OrdersList() {
  const { getToken } = useAuth();

  const { data: orders } = useQuery({
    queryKey: ['orders'],
    queryFn: async () => {
      const token = await getToken();
      const response = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/orders`, {
        headers: { Authorization: `Bearer ${token}` },
      });
      return response.json();
    },
  });

  return <ul>{orders?.map((o) => <li key={o.id}>{o.id}</li>)}</ul>;
}

NestJS Integration: Verifying Clerk JWTs

Your NestJS API receives the Clerk JWT and must verify it. Clerk signs JWTs with RS256 using a JWKS endpoint — the same mechanism as Azure AD tokens.

pnpm add @clerk/backend   # jwks-rsa / jsonwebtoken are only needed if you verify the JWT manually instead of via Clerk's SDK
// auth/clerk.guard.ts
import {
  Injectable,
  CanActivate,
  ExecutionContext,
  UnauthorizedException,
  SetMetadata,
} from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { Reflector } from '@nestjs/core';
import { Request } from 'express';
import { createClerkClient } from '@clerk/backend';

export const IS_PUBLIC_KEY = 'isPublic';
export const Public = () => SetMetadata(IS_PUBLIC_KEY, true);

@Injectable()
export class ClerkGuard implements CanActivate {
  private clerk: ReturnType<typeof createClerkClient>;

  constructor(
    private config: ConfigService,
    private reflector: Reflector,
  ) {
    this.clerk = createClerkClient({
      secretKey: this.config.get<string>('CLERK_SECRET_KEY'),
    });
  }

  async canActivate(context: ExecutionContext): Promise<boolean> {
    // Check for @Public() decorator — skip auth for public routes
    const isPublic = this.reflector.getAllAndOverride<boolean>(IS_PUBLIC_KEY, [
      context.getHandler(),
      context.getClass(),
    ]);
    if (isPublic) return true;

    const request = context.switchToHttp().getRequest<Request>();
    const token = this.extractToken(request);

    if (!token) {
      throw new UnauthorizedException('No authentication token provided');
    }

    try {
      // Clerk verifies the JWT against their public keys
      // This is equivalent to .AddJwtBearer() validation in ASP.NET Core
      const payload = await this.clerk.verifyToken(token, {
        authorizedParties: [this.config.get<string>('FRONTEND_URL')!],
      });

      // Attach Clerk's session claims to the request
      // Equivalent to setting HttpContext.User with a ClaimsPrincipal
      request['auth'] = {
        userId: payload.sub,          // Clerk user ID — equivalent to ClaimTypes.NameIdentifier
        sessionId: payload.sid,
        metadata: payload.metadata,   // Custom metadata from Clerk
        orgId: payload.org_id,        // Organization ID (if using Clerk Organizations)
        orgRole: payload.org_role,    // 'org:admin' | 'org:member'
      };

      return true;
    } catch {
      throw new UnauthorizedException('Invalid or expired token');
    }
  }

  private extractToken(request: Request): string | null {
    const [type, token] = request.headers.authorization?.split(' ') ?? [];
    return type === 'Bearer' ? token : null;
  }
}
// auth/auth.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { ClerkGuard } from './clerk.guard';
import { APP_GUARD } from '@nestjs/core';

@Module({
  imports: [ConfigModule],
  providers: [
    ClerkGuard,
    {
      // Register globally so all routes require auth by default
      // Equivalent to requiring [Authorize] on all controllers
      provide: APP_GUARD,
      useClass: ClerkGuard,
    },
  ],
  exports: [ClerkGuard],
})
export class AuthModule {}

Reading Auth Context in Controllers and Services

Define a type for the auth context and a custom decorator to extract it cleanly:

// auth/auth.types.ts
export interface AuthContext {
  userId: string;       // Clerk user ID (e.g., 'user_2abc...')
  sessionId: string;
  metadata: Record<string, unknown>;
  orgId?: string;       // Set if the user is acting within an organization
  orgRole?: string;     // 'org:admin' | 'org:member'
}

// auth/current-user.decorator.ts
// Custom parameter decorator — the NestJS equivalent of a custom model binder / value provider in ASP.NET
import { createParamDecorator, ExecutionContext } from '@nestjs/common';
import { AuthContext } from './auth.types';

export const CurrentUser = createParamDecorator(
  (_data: unknown, ctx: ExecutionContext): AuthContext => {
    const request = ctx.switchToHttp().getRequest();
    return request['auth']; // Set by ClerkGuard
  },
);
// orders/orders.controller.ts
import { Controller, Get, Post, Body } from '@nestjs/common';
import { CurrentUser } from '../auth/current-user.decorator';
import { AuthContext } from '../auth/auth.types';
// Paths below assume the conventional NestJS layout for this module
import { OrdersService } from './orders.service';
import { CreateOrderDto } from './dto/create-order.dto';

@Controller('orders')
export class OrdersController {
  constructor(private readonly ordersService: OrdersService) {}

  @Get()
  findMyOrders(
    // @CurrentUser() is equivalent to reading User.FindFirst(ClaimTypes.NameIdentifier) in ASP.NET
    @CurrentUser() auth: AuthContext,
  ) {
    return this.ordersService.findByUser(auth.userId);
  }

  @Post()
  create(
    @CurrentUser() auth: AuthContext,
    @Body() dto: CreateOrderDto,
  ) {
    return this.ordersService.create(auth.userId, dto);
  }
}

Role-Based Authorization

Clerk supports two mechanisms for roles:

  1. User metadata — arbitrary JSON you store on the user (e.g., { role: 'admin' })
  2. Clerk Organizations — a first-class multi-tenant feature with built-in roles (org:admin, org:member)

Approach 1: Metadata-based roles (simpler, for single-tenant apps)

Set user metadata via the Clerk dashboard or API:

// In a webhook handler or admin endpoint:
await clerk.users.updateUserMetadata(userId, {
  publicMetadata: { role: 'admin' },
});

Metadata on the JWT is available in the metadata field of your AuthContext. Build a guard:

// auth/roles.guard.ts
import { Injectable, CanActivate, ExecutionContext, ForbiddenException, SetMetadata } from '@nestjs/common';
import { Reflector } from '@nestjs/core';
import { AuthContext } from './auth.types';

export const Roles = (...roles: string[]) => SetMetadata('roles', roles);

@Injectable()
export class RolesGuard implements CanActivate {
  constructor(private reflector: Reflector) {}

  canActivate(context: ExecutionContext): boolean {
    const requiredRoles = this.reflector.getAllAndOverride<string[]>('roles', [
      context.getHandler(),
      context.getClass(),
    ]);

    if (!requiredRoles?.length) return true;

    const request = context.switchToHttp().getRequest();
    const auth: AuthContext = request['auth'];

    const userRole = (auth.metadata?.role as string) ?? 'user';
    if (!requiredRoles.includes(userRole)) {
      throw new ForbiddenException('Insufficient permissions');
    }

    return true;
  }
}

// Usage — equivalent to [Authorize(Roles = "Admin")] in ASP.NET
@Delete(':id')
@UseGuards(RolesGuard)
@Roles('admin')
deleteOrder(@Param('id', ParseIntPipe) id: number) {
  return this.ordersService.remove(id);
}

Approach 2: Clerk Organizations (for multi-tenant apps)

Clerk Organizations are the equivalent of Azure AD tenants + roles. Each organization has members with roles (org:admin or org:member). The active organization and role are included in the JWT automatically.

// auth/org-roles.guard.ts — require a specific organization role
import { Injectable, CanActivate, ExecutionContext, ForbiddenException, SetMetadata } from '@nestjs/common';
import { Reflector } from '@nestjs/core';
import { AuthContext } from './auth.types';

export const OrgRoles = (...roles: string[]) => SetMetadata('orgRoles', roles);

@Injectable()
export class OrgRolesGuard implements CanActivate {
  constructor(private reflector: Reflector) {}

  canActivate(context: ExecutionContext): boolean {
    const requiredRoles = this.reflector.getAllAndOverride<string[]>('orgRoles', [
      context.getHandler(),
      context.getClass(),
    ]);

    if (!requiredRoles?.length) return true;

    const request = context.switchToHttp().getRequest();
    const auth: AuthContext = request['auth'];

    if (!auth.orgId) {
      throw new ForbiddenException('Must be acting within an organization');
    }

    // org_role is 'org:admin' or 'org:member' in Clerk's format
    const userOrgRole = auth.orgRole ?? '';
    const hasRole = requiredRoles.some((r) => userOrgRole === `org:${r}`);

    if (!hasRole) {
      throw new ForbiddenException('Insufficient organization permissions');
    }

    return true;
  }
}

// Usage
@Delete(':id')
@UseGuards(OrgRolesGuard)
@OrgRoles('admin')  // Only org:admin can delete
deleteOrder(@Param('id') id: string) { /* ... */ }

Webhook Integration

Clerk sends webhooks for user lifecycle events: user.created, user.updated, user.deleted, session.created, etc. This is equivalent to handling events from an identity provider in .NET (though ASP.NET Identity doesn’t have a built-in webhook system — you’d typically hook these points by overriding UserManager<T> methods or in a custom IUserStore implementation).

Use webhooks to:

  • Sync Clerk users to your own database
  • Trigger onboarding flows on user.created
  • Clean up user data on user.deleted
  • Log security events on session.created from a new device
pnpm add svix  # Clerk uses Svix for webhook delivery and signature verification
// webhooks/webhooks.controller.ts
import {
  Controller,
  Post,
  Headers,
  HttpCode,
  Req,
  RawBodyRequest,
  BadRequestException,
} from '@nestjs/common';
import { Request } from 'express';
import { Webhook } from 'svix';
import type { WebhookEvent } from '@clerk/backend'; // Clerk's event payload types (re-exported by its SDKs)
import { ConfigService } from '@nestjs/config';
import { Public } from '../auth/clerk.guard';          // @Public() was defined alongside ClerkGuard
import { UsersService } from '../users/users.service'; // Path assumes a conventional users module

// The webhook endpoint must be public — Clerk can't authenticate with your JWT
@Controller('webhooks')
export class WebhooksController {
  constructor(
    private readonly config: ConfigService,
    private readonly usersService: UsersService,
  ) {}

  @Post('clerk')
  @Public()  // Skip JWT auth — Clerk webhook verification handles security
  @HttpCode(200)
  async handleClerkWebhook(
    @Headers('svix-id') svixId: string,
    @Headers('svix-timestamp') svixTimestamp: string,
    @Headers('svix-signature') svixSignature: string,
    @Req() req: RawBodyRequest<Request>,  // Raw body is required for signature verification
  ) {
    const webhookSecret = this.config.get<string>('CLERK_WEBHOOK_SECRET')!;
    const wh = new Webhook(webhookSecret);

    let event: WebhookEvent;
    try {
      // Verify the webhook signature — equivalent to validating HMAC in ASP.NET webhook handlers
      event = wh.verify(req.rawBody!.toString('utf8'), {
        'svix-id': svixId,
        'svix-timestamp': svixTimestamp,
        'svix-signature': svixSignature,
      }) as WebhookEvent;
    } catch {
      throw new BadRequestException('Invalid webhook signature');
    }

    switch (event.type) {
      case 'user.created':
        await this.usersService.createFromClerk(event.data);
        break;
      case 'user.updated':
        await this.usersService.updateFromClerk(event.data);
        break;
      case 'user.deleted':
        await this.usersService.deleteByClerkId(event.data.id);
        break;
    }

    return { received: true };
  }
}

To receive the raw request body (required for signature verification), configure NestJS to preserve it:

// main.ts — enable raw body parsing
const app = await NestFactory.create(AppModule, { rawBody: true });

Key Differences

| Concern | ASP.NET Identity / Azure AD | Clerk | Notes |
|---|---|---|---|
| User storage | Your SQL Server database | Clerk’s infrastructure | Clerk stores user records; you store a reference |
| Sign-up/sign-in UI | Scaffolded Razor pages or custom | Prebuilt React/Next.js components | Clerk’s UI components are customizable via the dashboard |
| Password hashing | ASP.NET Identity handles it | Clerk handles it | You never see passwords |
| JWT issuance | Your code (or Azure AD) | Clerk issues JWTs | Your API only verifies, never issues |
| JWT verification | AddJwtBearer() with your signing key | Clerk’s SDK via JWKS endpoint | RS256; public keys fetched automatically |
| Claims / Principal | ClaimsPrincipal on HttpContext.User | auth object on Request (you set this) | Same concept; different object model |
| Roles | [Authorize(Roles = "Admin")] + DB roles | User metadata or Clerk Organizations | Organization roles are built-in; custom roles use metadata |
| [AllowAnonymous] | Attribute on controller/action | @Public() custom decorator | Same concept; requires custom implementation |
| Email verification | ASP.NET Identity token flow | Clerk handles automatically | No code required on your end |
| Social login (OAuth) | Configure in Identity + callback handlers | Toggle in Clerk dashboard | One-click setup in Clerk |
| MFA | Separate configuration + TOTP library | Toggle in Clerk dashboard | No code required |
| Session management | Cookie/JWT, your responsibility | Clerk manages sessions | Clerk handles rotation and revocation |
| User management UI | Admin pages you build | Clerk Dashboard (hosted) | No code required for basic user management |
| Webhook events | Custom implementation | Built-in via Svix | user.created, user.deleted, etc. |
| Price | Part of .NET / Azure AD costs | Free tier; paid above 10k MAU | Evaluate for your scale |

Gotchas for .NET Engineers

Gotcha 1: Clerk’s User ID Is a String, Not an Integer

ASP.NET Identity uses GUID-based user IDs by default, but many .NET projects use integer IDs. Clerk user IDs are opaque strings in the format user_2abc123.... If you’re storing references to users in your database, your schema needs a String type (or varchar) for clerkId, not an integer.

// schema.prisma — correct approach
model Order {
  id        Int    @id @default(autoincrement())
  clerkId   String                        // Clerk user ID — String, not Int
  // ...
}

// WRONG: don't try to parse the Clerk ID as a number
const userId = parseInt(auth.userId); // NaN — will silently fail

If your existing database uses integer user IDs and you’re migrating to Clerk, you need an intermediate table:

model UserProfile {
  id        Int    @id @default(autoincrement())
  clerkId   String @unique   // Clerk's ID
  // Your existing integer-keyed data can reference this table
}

Gotcha 2: JWT Metadata Is Cached — Updates Are Not Instant

When you update user metadata via the Clerk API, the change is not reflected in the JWT immediately. JWTs have a lifespan (default: 60 seconds for Clerk’s short-lived tokens). Until the token expires and is refreshed, old metadata will appear in claims.

For frequently-changing authorization data (e.g., subscription status, feature flags), do not rely solely on JWT metadata. Instead, verify against your own database in the guard or service:

// For frequently changing state: verify against your DB, not just the JWT
@Injectable()
export class SubscriptionGuard implements CanActivate {
  constructor(private readonly subscriptionService: SubscriptionService) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    const auth: AuthContext = request['auth'];

    // Don't trust JWT metadata for subscription status — check the source of truth
    const isSubscribed = await this.subscriptionService.isActive(auth.userId);
    if (!isSubscribed) {
      throw new ForbiddenException('Active subscription required');
    }

    return true;
  }
}

For slow-changing data (e.g., user role in a multi-tenant app), JWT metadata is fine.

Gotcha 3: Webhook Endpoints Must Not Require JWT Authentication

This trips up .NET engineers who reflexively apply [Authorize] to all POST endpoints. Clerk’s webhook delivery has no way to obtain your JWT — it authenticates using its own HMAC signature scheme (via Svix). If your webhook endpoint requires JWT auth, Clerk’s requests will return 401 and webhooks will fail silently.

The pattern is: mark the webhook endpoint with @Public() (your JWT guard skips it), and rely on Svix signature verification for security. Never skip signature verification — an unprotected webhook endpoint is an unauthenticated POST endpoint that can execute code in your system.

@Post('clerk')
@Public()  // Required: skip JWT auth
async handleWebhook(/* ... */) {
  // Must verify Svix signature before processing
  try {
    event = wh.verify(body, headers);
  } catch {
    throw new BadRequestException('Invalid signature'); // Always reject unsigned requests
  }
  // ... safe to process now
}

Gotcha 4: The Frontend Needs the Publishable Key; the Backend Needs the Secret Key

Clerk has two keys with different security properties:

  • NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY — safe to expose to the browser (it’s designed to be public). This is how the Clerk SDK initializes in the browser.
  • CLERK_SECRET_KEY — must stay server-side only. Used by your NestJS API to verify tokens. Never expose it to the frontend or commit it to version control.
# .env (Next.js)
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_...   # Goes to the browser — safe
CLERK_SECRET_KEY=sk_test_...                    # Server-only — never expose

# .env (NestJS)
CLERK_SECRET_KEY=sk_test_...                    # Same key; NestJS needs it to verify tokens
CLERK_WEBHOOK_SECRET=whsec_...                  # For webhook signature verification

# .env.example (committed to git — shows required variables, no values)
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=
CLERK_SECRET_KEY=
CLERK_WEBHOOK_SECRET=

In ASP.NET Core, the equivalent mistake is committing the JWT signing key in appsettings.json. Clerk’s separation of publishable vs. secret key makes this harder to get wrong, but you still need discipline with the secret key.

Gotcha 5: Organization Context Must Be Explicitly Set by the Frontend

Clerk supports multiple organizations per user (like Azure AD where a user can be a member of multiple tenants). When a user is a member of multiple organizations, your frontend must explicitly specify which organization the user is “acting as” — Clerk calls this the “active organization.”

If your backend checks auth.orgId and it’s undefined, the user may have organizations but hasn’t set one as active. Handle this explicitly:

// Frontend — set the active organization on sign-in or when switching orgs
'use client';
import { useOrganization, useOrganizationList } from '@clerk/nextjs';

function OrgSwitcher() {
  const { organization } = useOrganization();
  const { setActive, userMemberships } = useOrganizationList();

  return (
    <select
      value={organization?.id}
      onChange={(e) => setActive({ organization: e.target.value })}
    >
      {userMemberships.data?.map((membership) => (
        <option key={membership.organization.id} value={membership.organization.id}>
          {membership.organization.name}
        </option>
      ))}
    </select>
  );
}
// Backend — handle missing org context gracefully
canActivate(context: ExecutionContext): boolean {
  const auth: AuthContext = context.switchToHttp().getRequest()['auth'];

  if (!auth.orgId) {
    // User is authenticated but has no active organization
    // Either they haven't selected one, or they're a personal account user
    throw new ForbiddenException(
      'Please select an organization to continue',
    );
  }
  // ...
}

Hands-On Exercise

Build the complete auth flow for a multi-tenant task management API.

Setup:

  1. Create a Clerk application with email/password and Google sign-in enabled
  2. Create a Next.js app with the Clerk middleware configured
  3. Create a NestJS API with the Clerk guard registered globally

Implement:

  1. Next.js pages:

    • /sign-in — Clerk <SignIn /> component
    • /sign-up — Clerk <SignUp /> component
    • /dashboard — Protected page showing the current user’s name and email
    • / — Public landing page with a sign-in link
  2. NestJS endpoints:

    • GET /api/me — Returns { userId, email } from the JWT claims. Requires auth.
    • GET /api/tasks — Returns tasks for the authenticated user. Requires auth.
    • POST /api/tasks — Creates a task for the authenticated user. Requires auth.
    • DELETE /api/tasks/:id — Deletes a task. Requires auth + ownership check (only the task owner can delete).
    • GET /api/health — Public health check. No auth required.
  3. Webhook handler:

    • POST /api/webhooks/clerk — Public endpoint with Svix verification
    • On user.created: log "New user: {userId}" to the console
    • On user.deleted: log "Deleted user: {userId}" to the console
  4. Test the full flow:

    • Sign up as a new user
    • Verify /api/health returns 200 without auth
    • Verify /api/tasks returns 401 without a token
    • From the Next.js dashboard, call /api/tasks with the Clerk token — verify it returns 200
    • Create a task, then attempt to delete another user’s task — verify it returns 403

Quick Reference

Clerk SDK Packages

| Package | Used In | Purpose |
|---|---|---|
| @clerk/nextjs | Next.js frontend | ClerkProvider, auth(), useUser(), useAuth(), components |
| @clerk/backend | NestJS API | createClerkClient(), verifyToken(), user management API |
| svix | NestJS webhook handler | Webhook signature verification |

Auth State — Where to Read It

| Context | How to Get the User | Equivalent in .NET |
|---|---|---|
| Next.js Server Component | const { userId } = await auth() | HttpContext.User via IHttpContextAccessor |
| Next.js Client Component | const { user } = useUser() | Injected via Blazor or JS interop |
| Next.js middleware | auth.protect() | UseAuthorization() |
| NestJS controller/service | @CurrentUser() auth: AuthContext | User.FindFirst(...) from HttpContext.User |

Common Clerk Patterns

// Protect all routes, allow specific ones to be public (Next.js middleware)
clerkMiddleware((auth, req) => {
  if (!isPublicRoute(req)) auth.protect();
});

// Get current user ID in a Server Component
const { userId } = await auth();

// Get full user object (makes an API call — use sparingly)
const user = await currentUser();

// Custom @Public() decorator (NestJS)
export const IS_PUBLIC_KEY = 'isPublic';
export const Public = () => SetMetadata(IS_PUBLIC_KEY, true);

// Read in ClerkGuard
const isPublic = this.reflector.getAllAndOverride<boolean>(IS_PUBLIC_KEY, [
  context.getHandler(),
  context.getClass(),
]);
if (isPublic) return true;

// Get token for API calls (Client Component)
const { getToken } = useAuth();
const token = await getToken();
headers: { Authorization: `Bearer ${token}` }

// Get token for API calls (Server Component)
const { getToken } = await auth();
const token = await getToken();
headers: { Authorization: `Bearer ${token}` }

Environment Variable Checklist

# Next.js (.env.local)
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_...   # Required — Clerk frontend SDK
CLERK_SECRET_KEY=sk_test_...                    # Required — Clerk backend verification
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in          # Optional — customize sign-in URL
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up          # Optional — customize sign-up URL

# NestJS (.env)
CLERK_SECRET_KEY=sk_test_...                    # Same key as Next.js backend
CLERK_WEBHOOK_SECRET=whsec_...                  # From Clerk Dashboard > Webhooks
FRONTEND_URL=http://localhost:3000              # For JWT authorizedParties validation

Further Reading

API Design: REST, DTOs, and Swagger in NestJS

For .NET engineers who know: ASP.NET Core controllers, model binding, data annotations, Swashbuckle, and [ApiController]
You’ll learn: How NestJS maps every ASP.NET Core API pattern — DTOs, validation, Swagger, versioning, pagination — to its own decorator-based system, and where the two diverge in ways that will catch you off guard
Time: 15-20 min read


The .NET Way (What You Already Know)

In ASP.NET Core you define a controller, decorate it with attributes, and the framework handles model binding, validation, and OpenAPI documentation. A typical endpoint looks like this:

[ApiController]
[Route("api/v1/[controller]")]
[Produces("application/json")]
public class ProductsController : ControllerBase
{
    private readonly IProductService _productService;

    public ProductsController(IProductService productService)
    {
        _productService = productService;
    }

    /// <summary>Get a paginated list of products.</summary>
    [HttpGet]
    [ProducesResponseType(typeof(PagedResult<ProductDto>), StatusCodes.Status200OK)]
    [ProducesResponseType(StatusCodes.Status400BadRequest)]
    public async Task<IActionResult> GetProducts([FromQuery] ProductQueryDto query)
    {
        var result = await _productService.GetProductsAsync(query);
        return Ok(result);
    }

    [HttpPost]
    [ProducesResponseType(typeof(ProductDto), StatusCodes.Status201Created)]
    [ProducesResponseType(typeof(ValidationProblemDetails), StatusCodes.Status422UnprocessableEntity)]
    public async Task<IActionResult> CreateProduct([FromBody] CreateProductDto dto)
    {
        var product = await _productService.CreateAsync(dto);
        return CreatedAtAction(nameof(GetProduct), new { id = product.Id }, product);
    }
}

Validation lives on the DTO via data annotations, and model state is checked automatically by [ApiController]:

public class CreateProductDto
{
    [Required]
    [MaxLength(200)]
    public string Name { get; set; }

    [Required]
    [Range(0.01, double.MaxValue, ErrorMessage = "Price must be positive")]
    public decimal Price { get; set; }

    [MaxLength(1000)]
    public string? Description { get; set; }
}

Swashbuckle generates OpenAPI from the XML doc comments, [ProducesResponseType] attributes, and DTO property types — all without a separate spec file.


The NestJS Way

NestJS is architecturally similar: controllers handle routing, DTOs carry data, providers hold business logic, and decorators on the DTO class drive both validation and OpenAPI documentation. The tooling is @nestjs/swagger (OpenAPI generation plus Swagger UI via swagger-ui-express) and class-validator / class-transformer for runtime validation.

Project Setup

pnpm add @nestjs/swagger swagger-ui-express
pnpm add class-validator class-transformer

Enable validation globally in main.ts:

// main.ts
import { NestFactory } from '@nestjs/core';
import { ValidationPipe } from '@nestjs/common';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Global validation pipe — equivalent to [ApiController] auto-ModelState checking
  app.useGlobalPipes(
    new ValidationPipe({
      whitelist: true,           // Strip properties not in DTO (like [BindNever])
      forbidNonWhitelisted: true, // Throw on unknown properties
      transform: true,           // Auto-transform payload types (string -> number etc.)
      transformOptions: {
        enableImplicitConversion: true,
      },
    }),
  );

  // OpenAPI / Swagger setup
  const config = new DocumentBuilder()
    .setTitle('Products API')
    .setDescription('Product catalog service')
    .setVersion('1.0')
    .addBearerAuth()             // JWT auth header in Swagger UI
    .build();

  const document = SwaggerModule.createDocument(app, config);
  SwaggerModule.setup('api/docs', app, document); // Serves at /api/docs

  await app.listen(3000);
}
bootstrap();

Defining a DTO with Validation and Swagger Decorators

In NestJS, a single class carries three responsibilities: it is the DTO shape, the validation spec, and the OpenAPI schema. The same class does what [Required], [MaxLength], [ProducesResponseType], and the XML doc comment did separately in .NET:

// dto/create-product.dto.ts
import { ApiProperty, ApiPropertyOptional } from '@nestjs/swagger';
import {
  IsString,
  IsNotEmpty,
  MaxLength,
  IsNumber,
  IsPositive,
  IsOptional,
  Min,
  IsEnum,
} from 'class-validator';

export enum ProductCategory {
  Electronics = 'electronics',
  Clothing = 'clothing',
  Books = 'books',
}

export class CreateProductDto {
  @ApiProperty({
    description: 'Product display name',
    example: 'Wireless Keyboard',
    maxLength: 200,
  })
  @IsString()
  @IsNotEmpty()
  @MaxLength(200)
  name: string;

  @ApiProperty({
    description: 'Price in USD cents (integer to avoid float precision issues)',
    example: 4999,
    minimum: 1,
  })
  @IsNumber()
  @IsPositive()
  @Min(1)
  priceInCents: number;

  @ApiPropertyOptional({
    description: 'Detailed product description',
    maxLength: 1000,
  })
  @IsOptional()
  @IsString()
  @MaxLength(1000)
  description?: string;

  @ApiProperty({ enum: ProductCategory })
  @IsEnum(ProductCategory)
  category: ProductCategory;
}
// dto/update-product.dto.ts
import { PartialType } from '@nestjs/swagger';
import { CreateProductDto } from './create-product.dto';

// PartialType makes all properties optional AND preserves swagger + validation decorators.
// Equivalent to a partial update DTO in C# — but with zero duplication.
export class UpdateProductDto extends PartialType(CreateProductDto) {}

PartialType from @nestjs/swagger (not the one from @nestjs/mapped-types) generates OpenAPI correctly for partial updates. This is the NestJS equivalent of writing a separate PatchProductDto with all optional properties — but done in one line.

The Controller

// products.controller.ts
import {
  Controller,
  Get,
  Post,
  Put,
  Delete,
  Body,
  Param,
  Query,
  HttpCode,
  HttpStatus,
  NotFoundException,
} from '@nestjs/common';
import {
  ApiTags,
  ApiOperation,
  ApiResponse,
  ApiParam,
  ApiBearerAuth,
} from '@nestjs/swagger';
import { ProductsService } from './products.service';
import { CreateProductDto } from './dto/create-product.dto';
import { UpdateProductDto } from './dto/update-product.dto';
import { ProductQueryDto } from './dto/product-query.dto';
import { ProductDto } from './dto/product.dto';
import { PagedResult } from '../common/paged-result';

@ApiTags('Products')          // Groups endpoints in Swagger UI — like [ApiExplorerSettings]
@ApiBearerAuth()              // JWT required — like [Authorize]
@Controller('products')       // Route prefix — like [Route("products")]
export class ProductsController {
  constructor(private readonly productsService: ProductsService) {}

  @Get()
  @ApiOperation({ summary: 'List products with pagination' })
  @ApiResponse({ status: 200, description: 'Paginated list' }) // Generic responses (PagedResult<ProductDto>) need ApiExtraModels + getSchemaPath to appear in the schema
  @ApiResponse({ status: 400, description: 'Invalid query parameters' })
  async getProducts(
    @Query() query: ProductQueryDto,
  ): Promise<PagedResult<ProductDto>> {
    return this.productsService.findAll(query);
  }

  @Get(':id')
  @ApiParam({ name: 'id', description: 'Product UUID' })
  @ApiResponse({ status: 200, type: ProductDto })
  @ApiResponse({ status: 404, description: 'Product not found' })
  async getProduct(@Param('id') id: string): Promise<ProductDto> {
    const product = await this.productsService.findOne(id);
    if (!product) {
      throw new NotFoundException(`Product ${id} not found`);
    }
    return product;
  }

  @Post()
  @HttpCode(HttpStatus.CREATED)  // 201 — like return CreatedAtAction(...)
  @ApiOperation({ summary: 'Create a product' })
  @ApiResponse({ status: 201, type: ProductDto })
  @ApiResponse({ status: 422, description: 'Validation failed' })
  async createProduct(@Body() dto: CreateProductDto): Promise<ProductDto> {
    return this.productsService.create(dto);
  }

  @Put(':id')
  @ApiResponse({ status: 200, type: ProductDto })
  async updateProduct(
    @Param('id') id: string,
    @Body() dto: UpdateProductDto,
  ): Promise<ProductDto> {
    return this.productsService.update(id, dto);
  }

  @Delete(':id')
  @HttpCode(HttpStatus.NO_CONTENT)  // 204
  async deleteProduct(@Param('id') id: string): Promise<void> {
    await this.productsService.remove(id);
  }
}

Query Parameters with Validation

Query parameters are a common friction point. NestJS handles them via a query DTO combined with @Query():

// dto/product-query.dto.ts
import { ApiPropertyOptional } from '@nestjs/swagger';
import { IsOptional, IsInt, Min, Max, IsString, IsEnum } from 'class-validator';
import { Type } from 'class-transformer';
import { ProductCategory } from './create-product.dto';

export class ProductQueryDto {
  @ApiPropertyOptional({ default: 1, minimum: 1 })
  @IsOptional()
  @IsInt()
  @Min(1)
  @Type(() => Number) // Query params arrive as strings — this coerces them
  page: number = 1;

  @ApiPropertyOptional({ default: 20, minimum: 1, maximum: 100 })
  @IsOptional()
  @IsInt()
  @Min(1)
  @Max(100)
  @Type(() => Number)
  pageSize: number = 20;

  @ApiPropertyOptional()
  @IsOptional()
  @IsString()
  search?: string;

  @ApiPropertyOptional({ enum: ProductCategory })
  @IsOptional()
  @IsEnum(ProductCategory)
  category?: ProductCategory;
}

Standard Response Envelope and Pagination

Define a reusable paginated response type once, and reference it everywhere:

// common/paged-result.ts
import { ApiProperty } from '@nestjs/swagger';

export class PagedResult<T> {
  @ApiProperty({ isArray: true })
  data: T[];

  @ApiProperty()
  totalCount: number;

  @ApiProperty()
  page: number;

  @ApiProperty()
  pageSize: number;

  @ApiProperty()
  totalPages: number;

  constructor(data: T[], totalCount: number, page: number, pageSize: number) {
    this.data = data;
    this.totalCount = totalCount;
    this.page = page;
    this.pageSize = pageSize;
    this.totalPages = Math.ceil(totalCount / pageSize);
  }
}

// common/api-response.ts — Standard envelope for single-resource responses
import { ApiProperty } from '@nestjs/swagger';

export class ApiResponseEnvelope<T> {
  @ApiProperty()
  success: boolean = true;

  data: T;

  @ApiProperty({ required: false })
  message?: string;
}

Standard Error Format

NestJS has built-in exception filters. The default error format differs from ASP.NET Core’s ProblemDetails. You can customize it to match RFC 7807 (ProblemDetails) if your clients expect that format:

// filters/http-exception.filter.ts
import {
  ExceptionFilter,
  Catch,
  ArgumentsHost,
  HttpException,
  HttpStatus,
} from '@nestjs/common';
import { Request, Response } from 'express';
import { randomUUID } from 'crypto';

@Catch(HttpException)
export class HttpExceptionFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();
    const request = ctx.getRequest<Request>();
    const status = exception.getStatus();
    const exceptionResponse = exception.getResponse();

    // Emit RFC 7807 ProblemDetails format — mirrors ASP.NET Core default
    const problemDetails = {
      type: `https://httpstatuses.com/${status}`,
      title: HttpStatus[status] ?? 'Error',
      status,
      detail:
        typeof exceptionResponse === 'string'
          ? exceptionResponse
          : (exceptionResponse as any).message,
      instance: request.url,
      traceId: request.headers['x-request-id'] ?? randomUUID(),
    };

    response.status(status).json(problemDetails);
  }
}

Register it globally in main.ts:

app.useGlobalFilters(new HttpExceptionFilter());

API Versioning

NestJS supports URI versioning, header versioning, and media-type versioning via @nestjs/versioning:

// main.ts — enable URI versioning
import { VersioningType } from '@nestjs/common';

app.enableVersioning({
  type: VersioningType.URI, // Routes become /v1/products, /v2/products
  defaultVersion: '1',
});
// products-v2.controller.ts
import { Controller, Version } from '@nestjs/common';

@Controller('products')
@Version('2')            // This controller handles /v2/products
export class ProductsV2Controller {
  // V2-specific endpoints
}

Generating the OpenAPI Spec

Export the spec to a file for client code generation:

// scripts/generate-openapi.ts
import { NestFactory } from '@nestjs/core';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import { writeFileSync } from 'fs';
import { AppModule } from '../src/app.module';

async function generate() {
  const app = await NestFactory.create(AppModule, { logger: false });

  const config = new DocumentBuilder()
    .setTitle('Products API')
    .setVersion('1.0')
    .build();

  const document = SwaggerModule.createDocument(app, config);
  writeFileSync('./openapi.json', JSON.stringify(document, null, 2));
  await app.close();
  console.log('OpenAPI spec written to openapi.json');
}

generate();

Generate TypeScript client code from the spec using openapi-typescript:

npx openapi-typescript ./openapi.json -o ./src/generated/api-types.ts
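The generated file exports a paths interface describing every route in the spec. One way to consume it, assuming you also install the openapi-fetch package (an assumption, not something set up earlier in this chapter), is a fully typed client:

// api-client.ts (sketch; route and query parameter names depend on your generated spec)
import createClient from 'openapi-fetch';
import type { paths } from './generated/api-types';

const client = createClient<paths>({ baseUrl: 'http://localhost:3000' });

// Path, query params, and response body are all typed from the OpenAPI spec
const { data, error } = await client.GET('/products', {
  params: { query: { page: 1, pageSize: 20 } },
});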

Zod as an Alternative to class-validator

class-validator is the default NestJS validation approach, but Zod is increasingly preferred because it produces types directly from schemas (no duplication, no decorator boilerplate). Use nestjs-zod to integrate:

// dto/create-product.zod.ts
import { z } from 'zod';
import { createZodDto } from 'nestjs-zod';

const CreateProductSchema = z.object({
  name: z.string().min(1).max(200),
  priceInCents: z.number().int().positive(),
  description: z.string().max(1000).optional(),
  category: z.enum(['electronics', 'clothing', 'books']),
});

// This generates a class that ValidationPipe understands AND Swagger can inspect
export class CreateProductDto extends createZodDto(CreateProductSchema) {}

// Infer the plain type if needed
export type CreateProduct = z.infer<typeof CreateProductSchema>;

The trade-off: nestjs-zod requires replacing ValidationPipe with ZodValidationPipe and adding a custom Swagger plugin. For greenfield projects, this is worth it. For teams already invested in class-validator, the decorator approach is fine.
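If you do adopt nestjs-zod, the wiring itself is small. A sketch, assuming the package exposes a ZodValidationPipe (check the exact export names against the version you install):

// main.ts (sketch): swap the class-validator pipe for the Zod one
import { ZodValidationPipe } from 'nestjs-zod';

app.useGlobalPipes(new ZodValidationPipe());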


Key Differences

| Concept | ASP.NET Core | NestJS |
|---|---|---|
| Route attribute | `[Route("api/[controller]")]` | `@Controller('products')` |
| HTTP method | `[HttpGet]`, `[HttpPost]` | `@Get()`, `@Post()` |
| Route parameter | `[FromRoute]` / parameter name | `@Param('id')` |
| Query string | `[FromQuery]` | `@Query()` or `@Query('name')` |
| Request body | `[FromBody]` | `@Body()` |
| Validation | Data annotations on DTO | class-validator decorators on DTO |
| Auto-validation | `[ApiController]` | `ValidationPipe` global pipe |
| Status code | `return StatusCode(201, ...)` | `@HttpCode(HttpStatus.CREATED)` |
| Swagger setup | Swashbuckle NuGet + `AddSwaggerGen()` | `@nestjs/swagger` + `DocumentBuilder` |
| Swagger group | `[ApiExplorerSettings(GroupName="")]` | `@ApiTags('Group')` |
| Swagger description | XML doc comment `/// <summary>` | `@ApiOperation({ summary: '' })` |
| Response type | `[ProducesResponseType(typeof(T), 200)]` | `@ApiResponse({ status: 200, type: T })` |
| Error format | ProblemDetails (RFC 7807) by default | Custom — use an `ExceptionFilter` |
| Partial update DTO | Write `PatchDto` with all optional props | `PartialType(CreateDto)` — zero duplication |
| Versioning | `AddApiVersioning()` + `[ApiVersion("1")]` | `enableVersioning()` + `@Version('1')` |

Gotchas for .NET Engineers

1. Validation decorators do nothing without the global ValidationPipe

In ASP.NET Core, [ApiController] activates model validation automatically. In NestJS, the class-validator decorators on your DTO are just metadata — they do nothing unless you register ValidationPipe. If you forget to add it in main.ts, invalid requests pass straight through to your service with no error and no warning.

// Without this, @IsString(), @IsNotEmpty() etc. are ignored entirely
app.useGlobalPipes(new ValidationPipe({ whitelist: true, transform: true }));

Always add this to main.ts before anything else. The whitelist: true option is the equivalent of [BindNever] — it strips unknown properties from the incoming body rather than silently passing them through.
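One related detail: ValidationPipe rejects invalid bodies with 400 Bad Request by default, while the controller above documents 422 for validation failures. If you want the two to agree, set errorHttpStatusCode explicitly. A sketch of the combined options:

// main.ts (sketch of the options used together)
import { ValidationPipe, HttpStatus } from '@nestjs/common';

app.useGlobalPipes(
  new ValidationPipe({
    whitelist: true,                 // Strip unknown properties from the body
    forbidNonWhitelisted: true,      // ...or reject the request outright when they appear
    transform: true,                 // Enable class-transformer (query param coercion, etc.)
    errorHttpStatusCode: HttpStatus.UNPROCESSABLE_ENTITY, // Respond 422 instead of the default 400
  }),
);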

2. transform: true is required for query parameter type coercion — but it has side effects

Query parameters arrive as strings. Without transform: true in ValidationPipe, an @IsInt() check on page will always fail because "1" is a string, not an integer. Adding transform: true fixes this, but it also means NestJS will attempt to convert incoming payloads into class instances, which can produce surprising behaviour if your constructors contain logic.

// Also requires @Type(() => Number) on the property for reliable coercion
@ApiPropertyOptional({ default: 1 })
@IsOptional()
@IsInt()
@Min(1)
@Type(() => Number)    // Without this, transform: true alone is unreliable
page: number = 1;

The @Type(() => Number) decorator from class-transformer is what actually drives the conversion. The transform: true flag in ValidationPipe enables class-transformer to run at all.

3. PartialType must come from @nestjs/swagger, not @nestjs/mapped-types

NestJS ships PartialType in two packages. The one in @nestjs/mapped-types works for validation but does not emit OpenAPI metadata. The one in @nestjs/swagger does both. Import from the wrong package and your partial update DTOs will be invisible in the Swagger UI.

// Wrong — loses Swagger decorators
import { PartialType } from '@nestjs/mapped-types';

// Correct — preserves Swagger AND validation decorators
import { PartialType } from '@nestjs/swagger';

4. The default error response format is not RFC 7807

ASP.NET Core returns ProblemDetails by default when [ApiController] is active. NestJS returns a simpler JSON structure:

{
  "statusCode": 400,
  "message": ["name must not be empty"],
  "error": "Bad Request"
}

If your API clients or monitoring tools expect RFC 7807, you need to write a custom ExceptionFilter (as shown above). This is a one-time setup, but it is easy to forget and will break clients that parse type and title fields from problem details.

5. @ApiResponse({ type: PagedResult<ProductDto> }) loses generic type information

TypeScript generics are erased at runtime. When NestJS introspects PagedResult<ProductDto> for OpenAPI generation, it sees PagedResult — the <ProductDto> part is gone. To make Swagger show the correct data array type, you need to either use getSchemaPath with refs, or create concrete subclasses:

// Option A: concrete subclass (verbose but reliable)
export class ProductPagedResult extends PagedResult<ProductDto> {
  @ApiProperty({ type: [ProductDto] })
  declare data: ProductDto[];
}

// Option B: inline schema reference (more flexible)
// Requires getSchemaPath from '@nestjs/swagger' and @ApiExtraModels(PagedResult, ProductDto)
// on the controller so both models are registered in the generated spec
@ApiResponse({
  schema: {
    allOf: [
      { $ref: getSchemaPath(PagedResult) },
      {
        properties: {
          data: { type: 'array', items: { $ref: getSchemaPath(ProductDto) } },
        },
      },
    ],
  },
})

6. class-validator does not validate nested objects by default

If your DTO contains a nested object, you must add @ValidateNested() and @Type(() => NestedClass) explicitly. Without these, the nested object passes validation even if its properties are invalid.

import { ValidateNested, IsArray } from 'class-validator';
import { Type } from 'class-transformer';

export class OrderDto {
  @IsArray()
  @ValidateNested({ each: true })  // Validate each element in the array
  @Type(() => LineItemDto)          // class-transformer must know the concrete type
  lineItems: LineItemDto[];
}

Hands-On Exercise

You have an ASP.NET Core orders API. Translate it to NestJS.

Requirements:

  1. Create a CreateOrderDto with:

    • customerId (UUID string, required)
    • lineItems (array of LineItemDto, minimum 1 item, nested validation)
    • shippingAddress (nested AddressDto, required)
    • notes (string, optional, max 500 chars)
  2. Create LineItemDto with productId (UUID), quantity (integer, 1-999), unitPriceInCents (positive integer).

  3. Create AddressDto with street, city, country (all required strings), postalCode (optional string).

  4. Create UpdateOrderDto using PartialType from @nestjs/swagger.

  5. Create OrderQueryDto for query parameters: page, pageSize, status (enum: pending | confirmed | shipped | delivered | cancelled), customerId.

  6. Create OrdersController with GET, GET :id, POST, PUT :id, DELETE :id endpoints. Include appropriate @ApiResponse decorators on each method.

  7. Register ValidationPipe with whitelist: true, transform: true, forbidNonWhitelisted: true, and errorHttpStatusCode: HttpStatus.UNPROCESSABLE_ENTITY (the pipe returns 400 by default; this API documents 422).

  8. Write a custom ExceptionFilter that converts NotFoundException to an RFC 7807 response with status: 404.

Verification: Hit the /api/docs route in your browser. Every DTO property should appear with its type, description, and example. Send a POST body with a missing customerId — you should receive a 422 with a message array listing the violated constraints.


Quick Reference

| Task | ASP.NET Core | NestJS |
|---|---|---|
| Mark controller | `[ApiController]` | (implicit — `ValidationPipe` does this) |
| Route prefix | `[Route("api/[controller]")]` | `@Controller('products')` |
| GET endpoint | `[HttpGet("{id}")]` | `@Get(':id')` |
| POST endpoint | `[HttpPost]` | `@Post()` |
| Set status code | `return StatusCode(201, obj)` | `@HttpCode(HttpStatus.CREATED)` + return value |
| Read body | `[FromBody] CreateDto dto` | `@Body() dto: CreateDto` |
| Read query param | `[FromQuery] string search` | `@Query('search') search: string` |
| Read route param | `[FromRoute] string id` | `@Param('id') id: string` |
| Required string | `[Required]` | `@IsString()` `@IsNotEmpty()` |
| Max length | `[MaxLength(200)]` | `@MaxLength(200)` |
| Numeric range | `[Range(1, 100)]` | `@Min(1)` `@Max(100)` |
| Optional property | `public string? Notes` | `@IsOptional()` + `notes?: string` |
| Enum validation | `[EnumDataType(typeof(Status))]` | `@IsEnum(Status)` |
| Nested validation | Automatic | `@ValidateNested()` `@Type(() => Nested)` |
| Partial update DTO | Write separate class | `PartialType(CreateDto)` from `@nestjs/swagger` |
| Query string coercion | Automatic | `@Type(() => Number)` + `transform: true` |
| Swagger group | XML comment + config | `@ApiTags('Group')` |
| Swagger summary | `/// <summary>text</summary>` | `@ApiOperation({ summary: 'text' })` |
| Swagger property | Automatic from type | `@ApiProperty({ description: '', example: '' })` |
| Optional swagger prop | Automatic from `?` | `@ApiPropertyOptional()` |
| Response type | `[ProducesResponseType(typeof(T), 200)]` | `@ApiResponse({ status: 200, type: T })` |
| 404 response | `return NotFound()` | `throw new NotFoundException('msg')` |
| Validation error status | 400 automatically with `[ApiController]` | 400 by default with `ValidationPipe`; 422 via `errorHttpStatusCode` |
| Serve Swagger UI | `app.UseSwaggerUI()` | `SwaggerModule.setup('api/docs', app, doc)` |
| Export spec | CLI tool | `writeFileSync` from `SwaggerModule.createDocument` |
| API versioning | `AddApiVersioning()` | `enableVersioning({ type: VersioningType.URI })` |

Decorator import sources:

// Routing and HTTP
import { Controller, Get, Post, Put, Delete, Body, Param, Query, HttpCode, HttpStatus } from '@nestjs/common';

// Swagger / OpenAPI
import { ApiTags, ApiOperation, ApiResponse, ApiProperty, ApiPropertyOptional, ApiBearerAuth, PartialType } from '@nestjs/swagger';

// Validation
import { IsString, IsNotEmpty, IsInt, IsOptional, IsEnum, IsArray, ValidateNested, Min, Max, MaxLength } from 'class-validator';
import { Type } from 'class-transformer';

Further Reading

Real-time Communication: SignalR vs. WebSockets in NestJS

For .NET engineers who know: SignalR Hubs, groups, Clients.All, Clients.Group(), and the IHubContext<T> service
You’ll learn: How NestJS WebSocket Gateways map to SignalR Hubs, where Socket.io diverges from the SignalR mental model, and how to scale real-time connections with a Redis adapter
Time: 15-20 min read


The .NET Way (What You Already Know)

SignalR abstracts over WebSockets (and falls back to Server-Sent Events or long-polling automatically). You define a Hub class, and clients call server methods and receive server-pushed events through a strongly-typed contract:

// Server — SignalR Hub
public class ChatHub : Hub
{
    // Clients call this method — like an RPC call
    public async Task SendMessage(string roomId, string message)
    {
        // Push to all clients in the group
        await Clients.Group(roomId).SendAsync("MessageReceived", new
        {
            User = Context.User?.Identity?.Name,
            Message = message,
            Timestamp = DateTimeOffset.UtcNow
        });
    }

    public async Task JoinRoom(string roomId)
    {
        await Groups.AddToGroupAsync(Context.ConnectionId, roomId);
        await Clients.Group(roomId).SendAsync("UserJoined", Context.ConnectionId);
    }

    public override async Task OnDisconnectedAsync(Exception? exception)
    {
        // Cleanup — Hub handles reconnection lifecycle automatically
        await base.OnDisconnectedAsync(exception);
    }
}

// Inject into a controller or service to push from outside the Hub
public class NotificationService
{
    private readonly IHubContext<ChatHub> _hubContext;

    public NotificationService(IHubContext<ChatHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public async Task NotifyRoom(string roomId, string message)
    {
        await _hubContext.Clients.Group(roomId).SendAsync("SystemMessage", message);
    }
}

SignalR gives you: automatic transport negotiation (WebSocket → SSE → long-poll), automatic reconnection on the client, strongly typed hubs, groups, and a Redis backplane for multi-server deployments — all with minimal configuration.


The NestJS Way

NestJS provides WebSocket support through @WebSocketGateway(). The default adapter uses the native Node.js ws library (bare WebSocket), but the most common choice is Socket.io via @nestjs/platform-socket.io. Socket.io brings rooms (equivalent to SignalR Groups), reconnection, and event-based messaging that is conceptually close to SignalR — though the protocol is different and the clients are not interchangeable.

Installation

# Socket.io adapter (most common — closest to SignalR feature parity)
npm install @nestjs/websockets @nestjs/platform-socket.io socket.io

# Redis adapter for scaling (equivalent to SignalR Redis backplane)
npm install @socket.io/redis-adapter ioredis

# Client-side
npm install socket.io-client

Defining a Gateway (the Hub Equivalent)

// chat.gateway.ts
import {
  WebSocketGateway,
  WebSocketServer,
  SubscribeMessage,
  MessageBody,
  ConnectedSocket,
  OnGatewayConnection,
  OnGatewayDisconnect,
  OnGatewayInit,
} from '@nestjs/websockets';
import { Server, Socket } from 'socket.io';
import { Logger, UseGuards } from '@nestjs/common';
import { ChatService } from './chat.service';
import { SendMessageDto } from './dto/send-message.dto';
import { WsJwtGuard } from '../auth/ws-jwt.guard';

// @WebSocketGateway() = SignalR Hub
// cors: { origin: '*' } — tighten this in production
@WebSocketGateway({
  namespace: '/chat',       // Socket.io namespace — like a separate Hub endpoint
  cors: { origin: '*' },
  transports: ['websocket'], // Force WebSocket only — omit to allow polling fallback
})
export class ChatGateway
  implements OnGatewayInit, OnGatewayConnection, OnGatewayDisconnect
{
  @WebSocketServer()
  server: Server;            // The Socket.io Server instance — use to push from this class

  private readonly logger = new Logger(ChatGateway.name);

  constructor(private readonly chatService: ChatService) {}

  afterInit(server: Server) {
    this.logger.log('WebSocket Gateway initialized');
  }

  // Equivalent to Hub.OnConnectedAsync()
  handleConnection(client: Socket) {
    const userId = client.handshake.auth.userId as string;
    this.logger.log(`Client connected: ${client.id}, userId: ${userId}`);
    client.data.userId = userId; // Store on the socket for later use
  }

  // Equivalent to Hub.OnDisconnectedAsync()
  handleDisconnect(client: Socket) {
    this.logger.log(`Client disconnected: ${client.id}`);
  }

  // @SubscribeMessage('eventName') = a Hub method that clients call
  // Equivalent to: public async Task SendMessage(string roomId, string message)
  @SubscribeMessage('sendMessage')
  @UseGuards(WsJwtGuard)
  async handleSendMessage(
    @MessageBody() dto: SendMessageDto,
    @ConnectedSocket() client: Socket,
  ): Promise<void> {
    const message = await this.chatService.saveMessage({
      roomId: dto.roomId,
      content: dto.content,
      userId: client.data.userId as string,
    });

    // Emit to all clients in the room — equivalent to Clients.Group(roomId).SendAsync(...)
    this.server.to(dto.roomId).emit('messageReceived', {
      id: message.id,
      content: message.content,
      userId: message.userId,
      timestamp: message.createdAt.toISOString(),
    });
  }

  // Equivalent to Groups.AddToGroupAsync()
  @SubscribeMessage('joinRoom')
  async handleJoinRoom(
    @MessageBody() data: { roomId: string },
    @ConnectedSocket() client: Socket,
  ): Promise<void> {
    await client.join(data.roomId);    // Socket.io room join — equivalent to SignalR Group

    // Notify others in the room
    client.to(data.roomId).emit('userJoined', {
      userId: client.data.userId,
      roomId: data.roomId,
    });

    // Acknowledge back to the joining client
    client.emit('joinedRoom', { roomId: data.roomId });
  }

  @SubscribeMessage('leaveRoom')
  async handleLeaveRoom(
    @MessageBody() data: { roomId: string },
    @ConnectedSocket() client: Socket,
  ): Promise<void> {
    await client.leave(data.roomId);
    client.to(data.roomId).emit('userLeft', { userId: client.data.userId });
  }
}

DTOs for WebSocket Messages

WebSocket message bodies can be validated with the same class-validator + ValidationPipe approach used for HTTP — but it requires a different pipe setup:

// dto/send-message.dto.ts
import { IsString, IsNotEmpty, MaxLength, IsUUID } from 'class-validator';

export class SendMessageDto {
  @IsUUID()
  roomId: string;

  @IsString()
  @IsNotEmpty()
  @MaxLength(2000)
  content: string;
}
// To validate WebSocket message bodies, apply the pipe at the method level:
import { UsePipes, ValidationPipe } from '@nestjs/common';

@SubscribeMessage('sendMessage')
@UsePipes(new ValidationPipe({ whitelist: true }))
async handleSendMessage(@MessageBody() dto: SendMessageDto, ...) { ... }
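One caveat with this setup: on failure, ValidationPipe throws an HttpException, which is meaningless to a Socket.io client. Converting failures to WsException keeps the error on the WebSocket channel (by default NestJS emits these back to the client as an 'exception' event; verify that behaviour against your adapter version). A sketch:

// Sketch: surface validation failures over the socket instead of as an HTTP error
import { UsePipes, ValidationPipe } from '@nestjs/common';
import { WsException } from '@nestjs/websockets';

@UsePipes(
  new ValidationPipe({
    whitelist: true,
    exceptionFactory: (errors) => new WsException(errors), // WsException instead of a 400
  }),
)
@SubscribeMessage('sendMessage')
async handleSendMessage(@MessageBody() dto: SendMessageDto, ...) { ... }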

Pushing Events from Outside the Gateway (IHubContext Equivalent)

In SignalR you inject IHubContext<THub> into any service to push to connected clients. NestJS has the same pattern — inject the Gateway and use its server property:

// notification.service.ts
import { Injectable } from '@nestjs/common';
import { ChatGateway } from './chat.gateway';

@Injectable()
export class NotificationService {
  constructor(private readonly chatGateway: ChatGateway) {}

  // Equivalent to _hubContext.Clients.Group(roomId).SendAsync(...)
  async notifyRoom(roomId: string, payload: unknown): Promise<void> {
    this.chatGateway.server.to(roomId).emit('systemNotification', payload);
  }

  // Push to a specific connected client by socket ID
  async notifyClient(socketId: string, event: string, payload: unknown): Promise<void> {
    this.chatGateway.server.to(socketId).emit(event, payload);
  }

  // Broadcast to all connected clients — equivalent to Clients.All.SendAsync(...)
  async broadcast(event: string, payload: unknown): Promise<void> {
    this.chatGateway.server.emit(event, payload);
  }
}

Register the Gateway in the module:

// chat.module.ts
import { Module } from '@nestjs/common';
import { ChatGateway } from './chat.gateway';
import { ChatService } from './chat.service';
import { NotificationService } from './notification.service';

@Module({
  providers: [ChatGateway, ChatService, NotificationService],
  exports: [NotificationService],
})
export class ChatModule {}

Scaling with Redis Adapter

A single NestJS process only knows about sockets connected to it. If you run multiple instances (as you would behind a load balancer), events emitted in one process won’t reach sockets connected to another. This is the same problem SignalR solves with its Redis backplane — and the solution is structurally identical.

// main.ts — add Redis adapter
import { NestFactory } from '@nestjs/core';
import { IoAdapter } from '@nestjs/platform-socket.io';
import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Create Redis pub/sub clients
  const pubClient = createClient({ url: process.env.REDIS_URL });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  // Apply the Redis adapter — equivalent to AddSignalR().AddStackExchangeRedis(...)
  const redisAdapter = createAdapter(pubClient, subClient);
  const ioAdapter = new IoAdapter(app);
  // An arrow function cannot use `super`, so keep a bound reference to the original
  // createIOServer and delegate to it
  const createIOServer = ioAdapter.createIOServer.bind(ioAdapter);
  ioAdapter.createIOServer = (port, options) => {
    const server = createIOServer(port, options);
    server.adapter(redisAdapter);
    return server;
  };
  app.useWebSocketAdapter(ioAdapter);

  await app.listen(3000);
}
bootstrap();

In practice, a cleaner approach is to create a custom RedisIoAdapter:

// adapters/redis-io.adapter.ts
import { IoAdapter } from '@nestjs/platform-socket.io';
import { ServerOptions } from 'socket.io';
import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';

export class RedisIoAdapter extends IoAdapter {
  private adapterConstructor: ReturnType<typeof createAdapter>;

  async connectToRedis(): Promise<void> {
    const pubClient = createClient({ url: process.env.REDIS_URL });
    const subClient = pubClient.duplicate();
    await Promise.all([pubClient.connect(), subClient.connect()]);
    this.adapterConstructor = createAdapter(pubClient, subClient);
  }

  createIOServer(port: number, options?: ServerOptions) {
    const server = super.createIOServer(port, options);
    server.adapter(this.adapterConstructor);
    return server;
  }
}

// main.ts
const redisIoAdapter = new RedisIoAdapter(app);
await redisIoAdapter.connectToRedis();
app.useWebSocketAdapter(redisIoAdapter);

Client-Side Integration (React / Vue)

// hooks/useChatSocket.ts — React hook
import { useEffect, useRef, useCallback } from 'react';
import { io, Socket } from 'socket.io-client';

interface Message {
  id: string;
  content: string;
  userId: string;
  timestamp: string;
}

interface UseChatSocketOptions {
  roomId: string;
  userId: string;
  onMessage: (msg: Message) => void;
  onUserJoined: (data: { userId: string }) => void;
}

export function useChatSocket({
  roomId,
  userId,
  onMessage,
  onUserJoined,
}: UseChatSocketOptions) {
  const socketRef = useRef<Socket | null>(null);

  useEffect(() => {
    const socket = io('http://localhost:3000/chat', {
      auth: { userId },               // Sent in handshake — available as client.handshake.auth
      transports: ['websocket'],
      reconnectionAttempts: 5,        // Socket.io handles reconnection automatically
      reconnectionDelay: 1000,
    });

    socketRef.current = socket;

    socket.on('connect', () => {
      // Join the room after connecting
      socket.emit('joinRoom', { roomId });
    });

    socket.on('messageReceived', onMessage);
    socket.on('userJoined', onUserJoined);

    socket.on('connect_error', (err) => {
      console.error('Socket connection error:', err.message);
    });

    return () => {
      socket.emit('leaveRoom', { roomId });
      socket.disconnect();
    };
  }, [roomId, userId]);

  const sendMessage = useCallback(
    (content: string) => {
      socketRef.current?.emit('sendMessage', { roomId, content });
    },
    [roomId],
  );

  return { sendMessage };
}

Server-Sent Events as a Simpler Alternative

For one-directional streaming (server to client only), Server-Sent Events are simpler than WebSockets. They work over plain HTTP, need no special adapter, and browsers handle reconnection automatically. Use SSE when you don’t need the client to send messages back over the same channel.

// notifications.controller.ts
import { Controller, Param, Sse, MessageEvent } from '@nestjs/common';
import { Observable, Subject, filter, map } from 'rxjs';

@Controller('notifications')
export class NotificationsController {
  // A shared subject — in production, use Redis Pub/Sub instead
  private events$ = new Subject<{ userId: string; payload: unknown }>();

  @Sse('stream/:userId')
  // NestJS @Sse() sets Content-Type: text/event-stream automatically
  stream(@Param('userId') userId: string): Observable<MessageEvent> {
    return this.events$.pipe(
      filter((event) => event.userId === userId),
      map((event) => ({
        data: JSON.stringify(event.payload),
        type: 'notification',
      })),
    );
  }

  // Called from a service to push an event to a specific user
  push(userId: string, payload: unknown) {
    this.events$.next({ userId, payload });
  }
}
// Client-side SSE — native browser API, no library needed
const source = new EventSource('/notifications/stream/user-123');
source.addEventListener('notification', (event) => {
  const payload = JSON.parse(event.data);
  console.log('Received:', payload);
});
source.onerror = () => {
  // Browser retries automatically after an error
  console.log('SSE connection error — browser will retry');
};

Key Differences

| Concept | SignalR (.NET) | NestJS (Socket.io) |
|---|---|---|
| Hub / Gateway class | `class ChatHub : Hub` | `@WebSocketGateway()` class `ChatGateway` |
| Client-callable method | `public async Task SendMessage(...)` | `@SubscribeMessage('sendMessage')` |
| Push to group / room | `Clients.Group(id).SendAsync('event', data)` | `server.to(roomId).emit('event', data)` |
| Push to all clients | `Clients.All.SendAsync('event', data)` | `server.emit('event', data)` |
| Push from service | `IHubContext<THub>` injected | Inject `ChatGateway`, use `.server` |
| Group membership | `Groups.AddToGroupAsync(connId, groupId)` | `client.join(roomId)` |
| Connection lifecycle | `OnConnectedAsync`, `OnDisconnectedAsync` | `handleConnection`, `handleDisconnect` |
| Connection ID | `Context.ConnectionId` | `client.id` |
| User data on connection | `Context.User` | `client.handshake.auth`, `client.data` |
| Scaling | Redis backplane via `AddStackExchangeRedis()` | `@socket.io/redis-adapter` |
| Transport negotiation | Automatic (WS → SSE → LP) | Manual — configure `transports` option |
| Client library | `@microsoft/signalr` (npm) | `socket.io-client` (npm) |
| Protocol | Custom SignalR protocol over WS/HTTP | Socket.io protocol over WS/HTTP |
| Namespace | Hub URL | `namespace` option on `@WebSocketGateway()` |

Gotchas for .NET Engineers

1. Socket.io is not standard WebSocket — the protocols are incompatible

SignalR clients use the SignalR protocol. Socket.io clients use the Socket.io protocol. Neither can connect to the other’s server. This matters when:

  • You have mobile clients using a native WebSocket library (they cannot connect to a Socket.io server without the Socket.io protocol layer)
  • You want to test your gateway with wscat or browser DevTools WebSocket inspector — raw WebSocket tools will show garbage because Socket.io wraps messages in its own framing

If you need standards-compliant WebSocket, use the native ws adapter instead of Socket.io:

npm install @nestjs/platform-ws ws
// main.ts — use the native ws adapter
import { WsAdapter } from '@nestjs/platform-ws';
app.useWebSocketAdapter(new WsAdapter(app));

You lose rooms, reconnection, and automatic transport negotiation — but you gain protocol compatibility with any WebSocket client.
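With the ws adapter, clients connect with a plain WebSocket. By default the NestJS ws adapter expects each frame to be a JSON object with event and data fields, which is what routes it to the matching @SubscribeMessage() handler (worth verifying against the version you install). A client sketch under that assumption:

// Plain WebSocket client against the native ws adapter (no Socket.io involved).
// Assumes the default NestJS framing of { event, data } JSON messages.
const ws = new WebSocket('ws://localhost:3000');

ws.onopen = () => {
  ws.send(JSON.stringify({ event: 'sendMessage', data: { roomId: 'abc', content: 'hi' } }));
};

ws.onmessage = (msg) => {
  console.log('Server event:', JSON.parse(msg.data));
};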

2. SignalR reconnects automatically — Socket.io reconnects but re-joins rooms manually

In SignalR, group membership is preserved across reconnections when using a backplane. In Socket.io, rooms are per-connection: when a client reconnects, it gets a new socket ID and is not in any rooms. The client must re-join rooms after reconnecting.

// Client must handle this explicitly
socket.on('connect', () => {
  // Re-join any rooms the user was in before disconnect
  socket.emit('joinRoom', { roomId: currentRoomId });
});

Design your client to always re-join rooms in the connect event handler, not just on the initial connection.

3. Authentication on WebSocket connections does not work the same as HTTP guards

NestJS HTTP guards use the request object to extract JWT tokens from the Authorization header. WebSocket connections upgrade from HTTP, but once the WebSocket connection is established, headers are not sent on subsequent messages — they are only available during the handshake.

// ws-jwt.guard.ts
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';
import { JwtService } from '@nestjs/jwt';
import { WsException } from '@nestjs/websockets';
import { Socket } from 'socket.io';

@Injectable()
export class WsJwtGuard implements CanActivate {
  constructor(private readonly jwtService: JwtService) {}

  canActivate(context: ExecutionContext): boolean {
    const client: Socket = context.switchToWs().getClient<Socket>();

    // Token must be in the handshake auth object, NOT in a header
    const token = client.handshake.auth.token as string | undefined;

    if (!token) {
      throw new WsException('Missing authentication token');
    }

    try {
      const payload = this.jwtService.verify(token);
      client.data.user = payload; // Attach to socket data for later use
      return true;
    } catch {
      throw new WsException('Invalid token');
    }
  }
}

The client must send the token in auth, not a header:

const socket = io('http://localhost:3000/chat', {
  auth: { token: localStorage.getItem('accessToken') },
});

4. Injecting the Gateway into services creates circular dependency risks

When a service injects ChatGateway and ChatGateway also injects that service (for example, ChatService), you get a circular dependency. NestJS will warn about this at startup. Break the cycle with forwardRef():

// notification.service.ts
import { Injectable, Inject, forwardRef } from '@nestjs/common';
import { ChatGateway } from './chat.gateway';

@Injectable()
export class NotificationService {
  constructor(
    @Inject(forwardRef(() => ChatGateway))
    private readonly chatGateway: ChatGateway,
  ) {}
}

A cleaner architectural pattern is to have an event bus (using NestJS’s EventEmitter2 or RxJS subjects) that the gateway listens to, rather than injecting the gateway directly into services.
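A minimal sketch of that event-bus shape, assuming @nestjs/event-emitter is installed and EventEmitterModule.forRoot() is imported in AppModule (neither is part of the chapter's setup above):

// notification.service.ts: the service only raises an event, it never touches the gateway
import { Injectable } from '@nestjs/common';
import { EventEmitter2 } from '@nestjs/event-emitter';

@Injectable()
export class NotificationService {
  constructor(private readonly events: EventEmitter2) {}

  notifyRoom(roomId: string, payload: unknown): void {
    this.events.emit('chat.notify', { roomId, payload });
  }
}

// chat.gateway.ts: add alongside the existing handlers; no service-to-gateway injection, no cycle
import { OnEvent } from '@nestjs/event-emitter';

@OnEvent('chat.notify')
handleNotify(event: { roomId: string; payload: unknown }): void {
  this.server.to(event.roomId).emit('systemNotification', event.payload);
}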

5. The @WebSocketGateway() port option behaves differently from what you expect

By default, @WebSocketGateway() attaches to the same HTTP server port that NestJS uses. If you pass a port argument like @WebSocketGateway(3001), NestJS creates a separate HTTP/WebSocket server on that port — it is not a sub-path of your main server. This can cause CORS issues and complicates deployment. Leave the port unspecified to share the main server port.

// Shares the main HTTP server port — the correct approach for most deployments
@WebSocketGateway({ namespace: '/chat', cors: { origin: process.env.FRONTEND_URL } })

// Creates a SEPARATE server on port 3001 — rarely what you want
@WebSocketGateway(3001)

6. Without the Redis adapter, horizontal scaling silently fails

If you deploy two NestJS instances without the Redis adapter, emitting to a room in instance A will only reach clients connected to instance A. Clients on instance B get nothing. There is no error — the emit simply vanishes. Always add the Redis adapter before deploying behind a load balancer, and verify it is working before scaling beyond one instance.


Hands-On Exercise

Build a real-time collaborative document presence system — users see who else is viewing a document.

Requirements:

  1. Create a PresenceGateway with these events:

    • Client sends joinDocument with { documentId: string } — server adds the client to the document room and broadcasts presenceUpdated to the room with the full list of connected users
    • Client sends leaveDocument — server removes from room, broadcasts updated list
    • handleDisconnect — removes the disconnected user from all rooms they were in, broadcasts updated lists
  2. Create a PresenceService that tracks which users are in which document rooms (use a Map<documentId, Set<userId>>).

  3. Create a DocumentsController with a GET /documents/:id/presence HTTP endpoint that returns the current presence list for a document (not via WebSocket — via regular REST).

  4. Add JWT authentication to the gateway using the handshake auth token approach.

  5. Create a React hook useDocumentPresence(documentId: string) that:

    • Connects and joins the document room on mount
    • Listens for presenceUpdated events and maintains a users: string[] state
    • Leaves the room and disconnects on unmount

Stretch goal: Replace the in-memory Map with Redis so the presence tracking works across multiple server instances. Use ioredis and expire keys after 60 seconds of inactivity.


Quick Reference

| Task | SignalR | NestJS / Socket.io |
|---|---|---|
| Define Hub/Gateway | `class MyHub : Hub` | `@WebSocketGateway()` class `MyGateway` |
| Client-callable event | `public Task MyMethod(...)` | `@SubscribeMessage('myEvent')` |
| Get socket/connection ID | `Context.ConnectionId` | `client.id` |
| Store data on connection | `Context.Items` | `client.data.key = value` |
| Push to room / group | `Clients.Group(id).SendAsync(...)` | `server.to(id).emit('event', data)` |
| Push to this client only | `Clients.Caller.SendAsync(...)` | `client.emit('event', data)` |
| Push to others in room | `Clients.OthersInGroup(id).SendAsync(...)` | `client.to(id).emit('event', data)` |
| Push to all connected | `Clients.All.SendAsync(...)` | `server.emit('event', data)` |
| Join room | `Groups.AddToGroupAsync(connId, group)` | `await client.join(roomId)` |
| Leave room | `Groups.RemoveFromGroupAsync(...)` | `await client.leave(roomId)` |
| Push from service | Inject `IHubContext<THub>` | Inject gateway, use `gateway.server` |
| Connection hook | `override OnConnectedAsync()` | `handleConnection(client: Socket)` |
| Disconnect hook | `override OnDisconnectedAsync()` | `handleDisconnect(client: Socket)` |
| Authenticate | `[Authorize]` + cookie/token in header | `WsJwtGuard` + `handshake.auth.token` |
| Validate message body | `[Required]` on model | `@UsePipes(ValidationPipe)` + DTO |
| SSE (one-way stream) | `IActionResult` + `PushStreamContent` | `@Sse()` returning `Observable<MessageEvent>` |
| Scale across instances | `AddStackExchangeRedis()` | `@socket.io/redis-adapter` |
| Client library | `@microsoft/signalr` | `socket.io-client` |
| Connect (client) | `new HubConnectionBuilder().withUrl(url).build()` | `io('http://localhost:3000/namespace', { auth: {...} })` |
| Reconnection (client) | Automatic | Automatic, but rooms must be re-joined |

Further Reading

Background Jobs and Task Scheduling

For .NET engineers who know: IHostedService, BackgroundService, Hangfire queues and recurring jobs, and the Worker Service project template
You’ll learn: How NestJS handles background processing with BullMQ (queue-based jobs) and @nestjs/schedule (cron), and why Node.js’s single-threaded nature changes the rules for CPU-intensive work
Time: 15-20 min read


The .NET Way (What You Already Know)

.NET’s background processing ecosystem has three layers:

IHostedService / BackgroundService — long-running in-process services, started and stopped with the application lifecycle:

public class EmailWorker : BackgroundService
{
    private readonly IEmailQueue _queue;
    private readonly ILogger<EmailWorker> _logger;

    public EmailWorker(IEmailQueue queue, ILogger<EmailWorker> logger)
    {
        _queue = queue;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var job = await _queue.DequeueAsync(stoppingToken);
            if (job != null)
            {
                await ProcessEmailAsync(job);
            }
        }
    }
}

Hangfire — a production-grade job queue with a Redis or SQL Server backend, retries, dashboards, and recurring jobs:

// Enqueue a fire-and-forget job
BackgroundJob.Enqueue(() => emailService.SendWelcomeEmail(userId));

// Schedule a delayed job
BackgroundJob.Schedule(() => invoiceService.SendReminder(invoiceId),
    TimeSpan.FromDays(3));

// Recurring job — cron expression
RecurringJob.AddOrUpdate("daily-report",
    () => reportService.GenerateDailyReport(),
    Cron.Daily(8)); // Every day at 08:00

// Continuation job — runs after another completes
var jobId = BackgroundJob.Enqueue(() => ProcessOrder(orderId));
BackgroundJob.ContinueJobWith(jobId, () => SendConfirmationEmail(orderId));

Worker Service — a separate process for CPU-heavy or independently deployable background work, typically talking to the queue via message broker.

.NET’s key advantage: BackgroundService runs in a thread pool. CPU-intensive work in a background thread does not block the main request-handling threads. The runtime manages this for you.


The NestJS Way

NestJS uses two libraries for background work:

  • BullMQ via @nestjs/bullmq — Redis-backed job queue. This is the Hangfire equivalent: fire-and-forget, delayed, scheduled, and retried jobs with a monitoring dashboard.
  • @nestjs/schedule — cron-based scheduling for recurring tasks. This is the RecurringJob.AddOrUpdate() equivalent, running cron expressions in-process.

Installation

# BullMQ — queue-based background processing
npm install @nestjs/bullmq bullmq ioredis

# Schedule — cron jobs
npm install @nestjs/schedule

# Bull Board — monitoring dashboard (equivalent to Hangfire Dashboard)
npm install @bull-board/api @bull-board/express

Setting Up BullMQ

Register the queue module once, referencing your Redis connection:

// app.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { ScheduleModule } from '@nestjs/schedule';

@Module({
  imports: [
    // BullMQ — connect to Redis
    BullModule.forRoot({
      connection: {
        host: process.env.REDIS_HOST ?? 'localhost',
        port: Number(process.env.REDIS_PORT) || 6379,
        password: process.env.REDIS_PASSWORD,
      },
    }),

    // Scheduler — activates @Cron() decorators
    ScheduleModule.forRoot(),

    // Register individual queues
    BullModule.registerQueue({ name: 'email' }),
    BullModule.registerQueue({ name: 'reports' }),
    BullModule.registerQueue({ name: 'image-processing' }),
  ],
})
export class AppModule {}

Defining a Processor (the Worker)

A processor is a class decorated with @Processor() that handles jobs from a named queue. This is the equivalent of implementing Execute in a Hangfire job class, or the ExecuteAsync loop in a BackgroundService:

// email.processor.ts
import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Job } from 'bullmq';
import { Logger } from '@nestjs/common';
import { EmailService } from './email.service';

// Job data shapes — define these explicitly, like Hangfire job arguments
export interface WelcomeEmailJobData {
  userId: string;
  email: string;
  firstName: string;
}

export interface InvoiceReminderJobData {
  invoiceId: string;
  customerId: string;
  daysOverdue: number;
}

// Union of all job types this processor handles
export type EmailJobData = WelcomeEmailJobData | InvoiceReminderJobData;

// @Processor('queue-name') — binds this class to the 'email' queue
@Processor('email')
export class EmailProcessor extends WorkerHost {
  private readonly logger = new Logger(EmailProcessor.name);

  constructor(private readonly emailService: EmailService) {
    super();
  }

  // process() is called for every job dequeued — equivalent to Execute() in Hangfire
  async process(job: Job<EmailJobData>): Promise<void> {
    this.logger.log(`Processing job ${job.id}, name: ${job.name}`);

    switch (job.name) {
      case 'welcome':
        await this.handleWelcomeEmail(job as Job<WelcomeEmailJobData>);
        break;
      case 'invoice-reminder':
        await this.handleInvoiceReminder(job as Job<InvoiceReminderJobData>);
        break;
      default:
        this.logger.warn(`Unknown job name: ${job.name}`);
    }
  }

  private async handleWelcomeEmail(job: Job<WelcomeEmailJobData>): Promise<void> {
    const { userId, email, firstName } = job.data;
    await this.emailService.sendWelcome({ userId, email, firstName });
  }

  private async handleInvoiceReminder(job: Job<InvoiceReminderJobData>): Promise<void> {
    const { invoiceId, customerId, daysOverdue } = job.data;
    await this.emailService.sendInvoiceReminder({ invoiceId, customerId, daysOverdue });
  }

  // Lifecycle events — equivalent to Hangfire's IServerFilter
  @OnWorkerEvent('completed')
  onCompleted(job: Job) {
    this.logger.log(`Job ${job.id} completed in ${job.finishedOn! - job.processedOn!}ms`);
  }

  @OnWorkerEvent('failed')
  onFailed(job: Job, error: Error) {
    this.logger.error(`Job ${job.id} failed: ${error.message}`, error.stack);
  }

  @OnWorkerEvent('stalled')
  onStalled(jobId: string) {
    this.logger.warn(`Job ${jobId} stalled — worker crashed during processing`);
  }
}

Enqueuing Jobs from a Service

// user.service.ts
import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bullmq';
import { Queue } from 'bullmq';
import { WelcomeEmailJobData, InvoiceReminderJobData } from './email.processor';

@Injectable()
export class UserService {
  constructor(
    @InjectQueue('email') private readonly emailQueue: Queue,
  ) {}

  async registerUser(data: { email: string; firstName: string }): Promise<void> {
    // Enqueue a fire-and-forget job — equivalent to BackgroundJob.Enqueue(...)
    await this.emailQueue.add(
      'welcome',                        // Job name — used to route in processor
      { userId: 'generated-uuid', ...data } satisfies WelcomeEmailJobData,
      {
        attempts: 3,                    // Retry up to 3 times — Hangfire default is also retries
        backoff: {
          type: 'exponential',          // Wait 1s, 2s, 4s between retries
          delay: 1000,
        },
        removeOnComplete: { count: 100 }, // Keep last 100 completed jobs for debugging
        removeOnFail: { count: 50 },
      },
    );
  }

  async scheduleInvoiceReminder(invoiceId: string, customerId: string): Promise<void> {
    // Delayed job — equivalent to BackgroundJob.Schedule(..., TimeSpan.FromDays(3))
    await this.emailQueue.add(
      'invoice-reminder',
      { invoiceId, customerId, daysOverdue: 3 } satisfies InvoiceReminderJobData,
      {
        delay: 3 * 24 * 60 * 60 * 1000, // 3 days in milliseconds
        attempts: 5,
        backoff: { type: 'fixed', delay: 5 * 60 * 1000 }, // retry every 5 minutes
      },
    );
  }
}

Recurring Jobs with @nestjs/schedule

For cron-style recurring tasks, @nestjs/schedule is the tool. It is simpler than BullMQ — it runs in-process with no Redis dependency, no retry logic, and no persistence. Use it for lightweight recurring work (report generation, cache warming, cleanup). Use BullMQ for anything that needs reliability, retries, or visibility.

// report.scheduler.ts
import { Injectable, Logger } from '@nestjs/common';
import { Cron, CronExpression, Interval, Timeout } from '@nestjs/schedule';
import { ReportService } from './report.service';

@Injectable()
export class ReportScheduler {
  private readonly logger = new Logger(ReportScheduler.name);

  constructor(private readonly reportService: ReportService) {}

  // Standard cron expression — equivalent to RecurringJob.AddOrUpdate("daily-report", ..., "0 8 * * *")
  @Cron('0 8 * * *', { timeZone: 'America/New_York' })
  async generateDailyReport(): Promise<void> {
    this.logger.log('Generating daily report...');
    await this.reportService.generateDaily();
  }

  // CronExpression enum provides common expressions without magic strings
  @Cron(CronExpression.EVERY_HOUR)
  async refreshExchangeRates(): Promise<void> {
    await this.reportService.updateExchangeRates();
  }

  // Fixed interval — equivalent to a Timer-based BackgroundService
  @Interval(30_000) // Every 30 seconds
  async checkExternalApiHealth(): Promise<void> {
    await this.reportService.pingExternalApis();
  }

  // One-shot delayed execution on startup — equivalent to Task.Delay() at start of ExecuteAsync
  @Timeout(5000) // Run once, 5 seconds after application starts
  async seedCacheOnStartup(): Promise<void> {
    this.logger.log('Warming cache after startup...');
    await this.reportService.warmCache();
  }
}

Register the scheduler in its module:

// report.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { ReportScheduler } from './report.scheduler';
import { ReportService } from './report.service';
import { EmailProcessor } from './email.processor';
import { EmailService } from './email.service';

@Module({
  imports: [BullModule.registerQueue({ name: 'email' })],
  providers: [ReportScheduler, ReportService, EmailProcessor, EmailService],
})
export class ReportModule {}

Bull Board — The Monitoring Dashboard

Bull Board is the equivalent of the Hangfire Dashboard. It shows queued, active, completed, failed, and delayed jobs, with the ability to retry failed jobs manually.

// main.ts — add Bull Board
import { NestFactory } from '@nestjs/core';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { Queue } from 'bullmq';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Set up Bull Board — access at /admin/queues
  const serverAdapter = new ExpressAdapter();
  serverAdapter.setBasePath('/admin/queues');

  // Get queue instances — you can also inject these via the module
  const emailQueue = new Queue('email', {
    connection: { host: process.env.REDIS_HOST ?? 'localhost' },
  });

  createBullBoard({
    queues: [new BullMQAdapter(emailQueue)],
    serverAdapter,
  });

  // Mount as Express middleware
  const expressApp = app.getHttpAdapter().getInstance();
  expressApp.use('/admin/queues', serverAdapter.getRouter());

  await app.listen(3000);
}

Protect the dashboard route with an auth middleware in production — the Hangfire Dashboard equivalent of UseHangfireDashboard(options => { options.Authorization = [...] }).
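A sketch of that protection, continuing the main.ts snippet above with HTTP Basic auth; the QUEUES_USER and QUEUES_PASSWORD variable names are illustrative:

// main.ts (sketch): guard the dashboard before mounting the Bull Board router
import type { Request, Response, NextFunction } from 'express';

const dashboardAuth = (req: Request, res: Response, next: NextFunction) => {
  const header = req.headers.authorization ?? '';
  const decoded = Buffer.from(header.replace('Basic ', ''), 'base64').toString();
  const [user, password] = decoded.split(':');

  if (user === process.env.QUEUES_USER && password === process.env.QUEUES_PASSWORD) {
    return next();
  }
  res.set('WWW-Authenticate', 'Basic realm="queues"').status(401).send('Authentication required');
};

expressApp.use('/admin/queues', dashboardAuth, serverAdapter.getRouter());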

Running Processors in a Separate Worker Process

For true isolation (the Node.js equivalent of a .NET Worker Service), run the processor in a separate process that only imports the queue module and processor, with no HTTP server:

// apps/worker/src/main.ts — separate entry point
import { NestFactory } from '@nestjs/core';
import { WorkerModule } from './worker.module';

async function bootstrap() {
  const app = await NestFactory.createApplicationContext(WorkerModule);
  // No HTTP listener — this process only processes queue jobs
  console.log('Worker process started');
}
bootstrap();
// apps/worker/src/worker.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { EmailProcessor } from './email.processor';
import { EmailService } from './email.service';

@Module({
  imports: [
    BullModule.forRoot({ connection: { host: process.env.REDIS_HOST } }),
    BullModule.registerQueue({ name: 'email' }),
  ],
  providers: [EmailProcessor, EmailService],
})
export class WorkerModule {}

This pattern maps directly to the .NET Worker Service project: a separate deployable unit that consumes from the queue without serving HTTP traffic.

CPU-Intensive Work: Worker Threads

This is where Node.js diverges fundamentally from .NET. In .NET, a BackgroundService runs in the thread pool. CPU-intensive work in a background thread does not block request handling threads — the runtime schedules both concurrently.

In Node.js, all of your JavaScript runs on a single thread. A CPU-intensive task (image resizing, PDF generation, complex computation) running in a BullMQ processor therefore blocks the entire process, including all other queued job processing; the event loop stalls for the duration.

The solution is worker_threads (the Node.js equivalent of spawning a .NET thread for CPU work):

// image.processor.ts
import { Processor, WorkerHost } from '@nestjs/bullmq';
import { Job } from 'bullmq';
import { Worker } from 'worker_threads';
import * as path from 'path';

export interface ImageResizeJobData {
  inputPath: string;
  outputPath: string;
  width: number;
  height: number;
}

@Processor('image-processing')
export class ImageProcessor extends WorkerHost {
  async process(job: Job<ImageResizeJobData>): Promise<void> {
    // Offload CPU-intensive work to a worker thread — does not block the event loop
    return new Promise((resolve, reject) => {
      const worker = new Worker(
        path.join(__dirname, 'image-resize.worker.js'),
        { workerData: job.data },
      );
      worker.on('message', resolve);
      worker.on('error', reject);
      worker.on('exit', (code) => {
        if (code !== 0) {
          reject(new Error(`Worker stopped with exit code ${code}`));
        }
      });
    });
  }
}
// image-resize.worker.ts — runs in a separate thread
import { workerData, parentPort } from 'worker_threads';
import sharp from 'sharp'; // Example: image processing library

async function resize() {
  const { inputPath, outputPath, width, height } = workerData;

  await sharp(inputPath)
    .resize(width, height)
    .toFile(outputPath);

  parentPort?.postMessage({ success: true, outputPath });
}

resize().catch((err) => {
  throw err;
});

For most I/O-bound work (database queries, HTTP calls, file reads) you do not need worker threads — Node.js’s async I/O handles these efficiently without blocking. Worker threads are only needed for synchronous CPU computation.


Key Differences

| Concept | .NET (Hangfire / BackgroundService) | NestJS (BullMQ / @nestjs/schedule) |
|---|---|---|
| Queue backend | Redis, SQL Server, or Azure Service Bus | Redis (BullMQ requires Redis) |
| Fire-and-forget job | `BackgroundJob.Enqueue(...)` | `queue.add('jobName', data)` |
| Delayed job | `BackgroundJob.Schedule(..., delay)` | `queue.add('name', data, { delay: ms })` |
| Recurring job | `RecurringJob.AddOrUpdate(...)` | `@Cron('0 8 * * *')` on a method |
| Retry configuration | `[AutomaticRetry(Attempts = 5)]` | `{ attempts: 5, backoff: {...} }` in job options |
| Worker / processor class | Implement `Execute(PerformContext)` | Extend `WorkerHost`, implement `process(job)` |
| Job data | Method parameters, serialized | `job.data` object, typed via generics |
| Monitoring dashboard | Hangfire Dashboard | Bull Board at `/admin/queues` |
| Concurrency | Thread pool — free to use CPU | Event loop — CPU work needs `worker_threads` |
| CPU-intensive work | Fine in `BackgroundService` thread | Must use `worker_threads` or separate process |
| Cron scheduling | `RecurringJob.AddOrUpdate(..., Cron.Daily)` | `@Cron(CronExpression.EVERY_DAY_AT_8AM)` |
| In-process timer | `System.Threading.PeriodicTimer` | `@Interval(30000)` |
| One-shot on startup | Override `StartAsync()` | `@Timeout(5000)` |
| Separate worker process | .NET Worker Service project | Separate `NestFactory.createApplicationContext()` |
| Job continuation | `BackgroundJob.ContinueJobWith(id, ...)` | BullMQ Flows (`FlowProducer`) |
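On that last row: BullMQ has no direct ContinueJobWith, but a Flow expresses the same dependency with the relationship inverted, because the parent job runs only after all of its children complete. A sketch with illustrative queue and job names:

// Sketch: "send a confirmation email after the order job finishes" as a BullMQ Flow.
// Note the inversion versus Hangfire: the continuation is the PARENT, the first step is a CHILD.
import { FlowProducer } from 'bullmq';

const flow = new FlowProducer({
  connection: { host: process.env.REDIS_HOST ?? 'localhost' },
});

const orderId = 'order-123'; // illustrative

await flow.add({
  name: 'send-confirmation',     // Runs last, once every child has completed
  queueName: 'email',
  data: { orderId },
  children: [
    { name: 'process-order', queueName: 'orders', data: { orderId } }, // Runs first
  ],
});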

Gotchas for .NET Engineers

1. CPU-intensive jobs block the entire Node.js process

This is the most consequential difference from .NET. In a Hangfire worker, you can perform CPU-heavy work in the job’s Execute method without affecting other workers or the web server — the thread pool handles parallelism. In a BullMQ processor running in the main Node.js process, CPU work blocks the event loop.

The symptoms: all other queued jobs stop processing, HTTP requests time out, health checks fail. The job eventually completes, but everything waits.

The solutions, in order of preference:

  1. Use an external service or library that does the CPU work asynchronously at the OS level (for example, sharp for images uses native bindings that run the heavy work off the event loop)
  2. Use worker_threads for synchronous CPU computation (as shown above)
  3. Run the processor in a completely separate process (createApplicationContext)

Rule of thumb: if the operation takes more than 10ms of synchronous JavaScript execution (not I/O wait), move it to a worker thread.
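To see whether a particular job crosses that threshold, Node's perf_hooks can measure event-loop delay directly (the stretch goal in the exercise below leans on the same API). A minimal sketch:

// Sketch: log event-loop delay while a suspect job runs; values are reported in nanoseconds
import { monitorEventLoopDelay } from 'perf_hooks';

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  console.log(
    `event loop delay: mean ${(histogram.mean / 1e6).toFixed(1)}ms, max ${(histogram.max / 1e6).toFixed(1)}ms`,
  );
  histogram.reset();
}, 10_000);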

2. @nestjs/schedule cron jobs are not persisted and do not survive restarts

Hangfire stores recurring job schedules in the database. If your server restarts at 07:58 and a job was scheduled for 08:00, Hangfire will run it when the server comes back. @nestjs/schedule cron jobs exist only in memory — if the process restarts mid-schedule, the next run is determined from the cron expression relative to when the process started, not from when the last run occurred.

For reliable recurring jobs where missed runs matter, enqueue them via BullMQ with a repeatable job instead:

// Persistent repeating job — survives process restarts
await this.reportQueue.add(
  'daily-report',
  {},
  {
    repeat: {
      pattern: '0 8 * * *', // Cron expression
      tz: 'America/New_York',
    },
    jobId: 'daily-report-unique', // Prevents duplicate registrations on restart
  },
);

BullMQ stores repeatable job schedules in Redis, so a restart does not lose the schedule.

3. Multiple instances will all run @Cron() jobs concurrently

In a .NET deployment with multiple instances, Hangfire’s database-based locking ensures only one instance runs each recurring job. @nestjs/schedule has no such coordination — if you run three instances of your NestJS app, all three will fire the @Cron('0 8 * * *') handler at 08:00. You get three runs instead of one.

The solutions:

  • Use BullMQ repeatable jobs (Redis-coordinated, runs once across all instances)
  • Add a distributed lock around the cron handler:
@Cron(CronExpression.EVERY_DAY_AT_8AM)
async generateDailyReport(): Promise<void> {
  // Use ioredis SET NX EX as a distributed lock
  const acquired = await this.redis.set(
    'lock:daily-report',
    '1',
    'EX', 300,  // Lock expires after 5 minutes
    'NX',       // Only set if not exists
  );

  if (!acquired) {
    return; // Another instance got the lock
  }

  try {
    await this.reportService.generateDaily();
  } finally {
    await this.redis.del('lock:daily-report');
  }
}

4. BullMQ requires Redis — there is no SQL Server or in-memory backend

Hangfire supports SQL Server, PostgreSQL, and Redis backends. BullMQ only supports Redis. If your infrastructure does not include Redis, either add it (it is cheap and widely supported, including on Render, Railway, and AWS ElastiCache) or use @nestjs/schedule for scheduling and accept its limitations.

There is no in-memory BullMQ option. For integration tests, use a real Redis instance (via Docker or a test container) or mock the queue entirely.
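
If you mock, the queue's injection token can be replaced in a Nest testing module. A minimal sketch, assuming an 'orders' queue and an OrderService that only calls queue.add() (getQueueToken is the token helper from @nestjs/bullmq):

// TypeScript — stubbing a BullMQ queue in a unit test (inside an async Jest test)
import { Test } from '@nestjs/testing';
import { getQueueToken } from '@nestjs/bullmq';

const moduleRef = await Test.createTestingModule({
  providers: [
    OrderService, // hypothetical service under test that uses @InjectQueue('orders')
    { provide: getQueueToken('orders'), useValue: { add: jest.fn() } }, // stub only what the service calls
  ],
}).compile();

const orderService = moduleRef.get(OrderService);
// Assert against the stub: expect(moduleRef.get(getQueueToken('orders')).add).toHaveBeenCalled()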

5. Job data must be serializable — class instances lose their methods

Hangfire serializes job arguments to JSON when enqueuing. BullMQ does the same. In Hangfire, this is enforced at the call site because you pass method arguments. In BullMQ, you pass a plain object — but if you accidentally pass a class instance (with methods and getters), only the serializable data properties survive. Methods and computed properties are gone when the processor receives the job.

// Wrong — the class instance is serialized to JSON, methods are lost
class UserRegistration {
  constructor(public email: string, public firstName: string) {}
  getDisplayName() { return this.firstName; }  // This will be gone
}

await this.queue.add('welcome', new UserRegistration('a@b.com', 'Alice'));
// In the processor: job.data.getDisplayName is undefined

// Correct — use plain objects
await this.queue.add('welcome', {
  email: 'a@b.com',
  firstName: 'Alice',
} satisfies WelcomeEmailJobData);

Always use plain object literals for job data. Define the shape with a TypeScript interface or type, and use satisfies to catch mismatches at the call site.
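
To make that concrete, define the shape once and let it constrain both sides of the queue. A sketch (WelcomeEmailJobData and the email service call are illustrative names; Job comes from bullmq):

// TypeScript — one shared type for producer and processor
import { Job } from 'bullmq';

interface WelcomeEmailJobData {
  email: string;
  firstName: string;
}

// Producer: `satisfies` flags typos and extra properties at the call site
await this.queue.add(
  'welcome',
  { email: 'a@b.com', firstName: 'Alice' } satisfies WelcomeEmailJobData,
);

// Processor: job.data is plain JSON, typed as the same shape
async process(job: Job<WelcomeEmailJobData>): Promise<void> {
  const { email, firstName } = job.data; // data only; no methods survive serialization
  await this.emailService.sendWelcome(email, firstName); // illustrative email service call
}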


Hands-On Exercise

Build an order fulfillment system that processes orders asynchronously.

Requirements:

  1. Create an orders BullMQ queue and an OrderProcessor that handles three job types:

    • validate-payment: checks payment status, throws if payment is declined (triggers BullMQ retry)
    • reserve-inventory: decrements stock, marks order items as reserved
    • send-confirmation: sends confirmation email via EmailService
  2. Create an OrderService that, when placeOrder() is called:

    • Saves the order with status: 'pending'
    • Enqueues validate-payment with a 2-retry policy and 5-second exponential backoff
    • Chains reserve-inventory (enqueue only if validate-payment completes, using BullMQ Flows or a completion handler)
  3. Create a CleanupScheduler with a @Cron('0 2 * * *') job that deletes orders older than 90 days with status: 'cancelled'.

  4. Add a @Interval(60_000) method that checks for orders stuck in pending status for more than 10 minutes and logs an alert.

  5. Mount Bull Board at /admin/queues and protect it with a basic auth middleware that reads credentials from environment variables.

Stretch goal: Identify the validate-payment step as potentially CPU-bound (imagine it involves RSA signature verification). Rewrite the handler to offload it to a worker_thread. Measure the event loop impact before and after using perf_hooks.monitorEventLoopDelay().


Quick Reference

| Task | .NET | NestJS |
|---|---|---|
| Register queue | Hangfire UseHangfire() | BullModule.registerQueue({ name: 'q' }) |
| Enqueue job | BackgroundJob.Enqueue(...) | queue.add('name', data) |
| Delayed job | BackgroundJob.Schedule(..., delay) | queue.add('name', data, { delay: ms }) |
| Recurring via cron | RecurringJob.AddOrUpdate(...) | @Cron('0 8 * * *') on a method |
| Persistent recurring | Hangfire recurring (SQL/Redis backed) | queue.add('name', {}, { repeat: { pattern: '...' } }) |
| Interval timer | PeriodicTimer or Task.Delay loop | @Interval(30000) |
| One-shot on start | override StartAsync() | @Timeout(5000) |
| Define processor | Implement Execute(PerformContext) | @Processor('q') class P extends WorkerHost |
| Process jobs | Execute() method | async process(job: Job<T>): Promise<void> |
| Inject queue | Constructor injection | @InjectQueue('q') private queue: Queue |
| Job retry | [AutomaticRetry(Attempts = 5)] | { attempts: 5, backoff: { type: 'exponential' } } |
| Job event hooks | IServerFilter | @OnWorkerEvent('completed') |
| CPU-intensive work | Thread pool (automatic) | worker_threads (manual) |
| Monitoring UI | Hangfire Dashboard | Bull Board (@bull-board/api) |
| Multi-instance cron | Hangfire DB locking (automatic) | Redis distributed lock (manual) |
| Separate worker | .NET Worker Service | createApplicationContext(WorkerModule) |
| Test queue in CI | UseInMemoryStorage() | Real Redis via Docker / Testcontainers |

Common BullMQ job options:

await queue.add('job-name', data, {
  attempts: 3,
  backoff: {
    type: 'exponential', // or 'fixed'
    delay: 2000,         // Base delay in ms
  },
  delay: 5000,           // Wait before first attempt (ms)
  priority: 1,           // Lower number = higher priority
  removeOnComplete: { count: 100 },  // Keep last N completed jobs
  removeOnFail: { count: 50 },
  jobId: 'unique-id',    // Deduplication key — prevents duplicate jobs
});

Common @nestjs/schedule expressions:

@Cron('* * * * * *')               // Every second
@Cron('0 * * * * *')               // Every minute
@Cron('0 0 * * * *')               // Every hour
@Cron('0 0 8 * * *')               // Daily at 08:00
@Cron('0 0 8 * * 1')               // Every Monday at 08:00
@Cron(CronExpression.EVERY_HOUR)   // Every hour (enum)
@Cron(CronExpression.EVERY_DAY_AT_8AM) // Daily at 08:00 (enum)

Further Reading

Caching Strategies: IMemoryCache to Redis

For .NET engineers who know: IMemoryCache, IDistributedCache, StackExchange.Redis, [ResponseCache], and cache invalidation patterns in ASP.NET Core
You’ll learn: How NestJS handles in-process and distributed caching with @nestjs/cache-manager, how client-side caching with TanStack Query complements server-side output caching, and how to design a layered caching architecture that maps to what you know from .NET
Time: 15-20 min read


The .NET Way (What You Already Know)

ASP.NET Core provides caching at multiple layers:

IMemoryCache — in-process, per-server, fast:

public class ProductService
{
    private readonly IMemoryCache _cache;
    private readonly IProductRepository _repo;

    public async Task<Product?> GetProductAsync(string id)
    {
        var cacheKey = $"product:{id}";

        if (_cache.TryGetValue(cacheKey, out Product? cached))
        {
            return cached;
        }

        var product = await _repo.GetByIdAsync(id);

        if (product != null)
        {
            _cache.Set(cacheKey, product, new MemoryCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10),
                SlidingExpiration = TimeSpan.FromMinutes(2),
                Size = 1,
            });
        }

        return product;
    }
}

IDistributedCache with Redis — shared across instances, slower than in-process:

// Registration
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
    options.InstanceName = "MyApp:";
});

// Usage — same interface as IMemoryCache, but serialization is manual
public async Task<Product?> GetProductAsync(string id)
{
    var key = $"product:{id}";
    var bytes = await _distributedCache.GetAsync(key);

    if (bytes != null)
    {
        return JsonSerializer.Deserialize<Product>(bytes);
    }

    var product = await _repo.GetByIdAsync(id);

    if (product != null)
    {
        await _distributedCache.SetAsync(
            key,
            JsonSerializer.SerializeToUtf8Bytes(product),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10),
            });
    }

    return product;
}

[ResponseCache] — HTTP response caching at the controller action level:

[HttpGet("{id}")]
[ResponseCache(Duration = 60, VaryByQueryKeys = new[] { "id" })]
public async Task<IActionResult> GetProduct(string id)
{
    var product = await _productService.GetProductAsync(id);
    return Ok(product);
}

This framework gives you a consistent interface regardless of the backing store, plus HTTP-level caching for client-visible responses.


The NestJS Way

NestJS handles caching through @nestjs/cache-manager, which wraps the cache-manager library. The interface is similar to IDistributedCache — a generic get/set/del API — with pluggable stores: in-memory (default) or Redis (@keyv/redis or cache-manager-ioredis).

On top of server-side caching, the modern JS stack adds a layer .NET engineers often miss: client-side server-state caching with TanStack Query. This offloads a significant class of caching (avoiding redundant API calls) to the browser, which changes how you think about [ResponseCache] and HTTP cache headers.

Installation

# Core cache manager
pnpm add @nestjs/cache-manager cache-manager

# Redis store
pnpm add @keyv/redis keyv

# For ETag / HTTP cache header utilities
pnpm add etag

In-Memory Caching

// app.module.ts
import { Module } from '@nestjs/common';
import { CacheModule } from '@nestjs/cache-manager';

@Module({
  imports: [
    CacheModule.register({
      isGlobal: true,
      ttl: 60 * 1000, // Default TTL: 60 seconds (milliseconds in cache-manager v5+)
      max: 1000,       // Maximum number of items in the in-memory store
    }),
  ],
})
export class AppModule {}
// product.service.ts
import { Injectable } from '@nestjs/common';
import { Cache } from 'cache-manager';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Inject } from '@nestjs/common';
import { ProductRepository } from './product.repository';

@Injectable()
export class ProductService {
  constructor(
    @Inject(CACHE_MANAGER) private readonly cache: Cache,
    private readonly repo: ProductRepository,
  ) {}

  async getProduct(id: string): Promise<Product | null> {
    const cacheKey = `product:${id}`;

    // Try cache first — equivalent to _cache.TryGetValue(...)
    const cached = await this.cache.get<Product>(cacheKey);
    if (cached !== undefined) {
      return cached;
    }

    // Cache miss — load from DB
    const product = await this.repo.findById(id);

    if (product) {
      // Store with a specific TTL (ms) — equivalent to AbsoluteExpirationRelativeToNow
      await this.cache.set(cacheKey, product, 10 * 60 * 1000); // 10 minutes
    }

    return product ?? null;
  }

  // Cache invalidation — equivalent to _cache.Remove(key)
  async invalidateProduct(id: string): Promise<void> {
    await this.cache.del(`product:${id}`);
  }
}

Redis Integration

For distributed caching across multiple instances, swap the in-memory store for Redis:

// app.module.ts
import { CacheModule } from '@nestjs/cache-manager';
import { createKeyv } from '@keyv/redis';

@Module({
  imports: [
    CacheModule.registerAsync({
      isGlobal: true,
      useFactory: () => ({
        stores: [
          createKeyv(process.env.REDIS_URL ?? 'redis://localhost:6379', {
            namespace: 'myapp', // Key prefix — equivalent to InstanceName in .NET
          }),
        ],
        ttl: 60 * 1000,
      }),
    }),
  ],
})
export class AppModule {}

The ProductService code does not change — the Cache interface is the same regardless of the backing store. This mirrors how IDistributedCache abstracts over the backing store in .NET, except NestJS handles serialization automatically (values are serialized to JSON internally).

Multi-Layer Caching: L1 (In-Memory) + L2 (Redis)

The most robust production setup mirrors what some .NET teams implement manually: an in-memory L1 cache in front of Redis L2. Cache hits in L1 cost microseconds; Redis hits cost ~1ms.

// app.module.ts — two-tier cache
import { CacheModule } from '@nestjs/cache-manager';
import { createKeyv } from '@keyv/redis';
import Keyv from 'keyv';

@Module({
  imports: [
    CacheModule.registerAsync({
      isGlobal: true,
      useFactory: () => ({
        stores: [
          // L1: in-memory (fast, per-instance)
          new Keyv({ ttl: 30 * 1000 }),      // 30 second in-memory cache

          // L2: Redis (shared, survives restarts)
          createKeyv(process.env.REDIS_URL, { namespace: 'myapp' }),
        ],
        // cache-manager checks L1 first, falls back to L2, then writes back to L1
      }),
    }),
  ],
})
export class AppModule {}

Cache Interceptor (the [ResponseCache] Equivalent)

NestJS ships a CacheInterceptor that caches entire controller responses, equivalent to [ResponseCache] on an ASP.NET Core action. Apply it at the controller or method level:

// products.controller.ts
import { Controller, Get, Param, UseInterceptors } from '@nestjs/common';
import { CacheInterceptor, CacheTTL, CacheKey } from '@nestjs/cache-manager';

@Controller('products')
@UseInterceptors(CacheInterceptor)  // Cache all responses from this controller
export class ProductsController {
  constructor(private readonly productService: ProductService) {}

  @Get()
  @CacheTTL(30 * 1000)              // Override TTL: 30 seconds for this endpoint
  async listProducts() {
    return this.productService.findAll();
  }

  @Get(':id')
  @CacheKey('product-by-id')        // Custom cache key prefix
  @CacheTTL(10 * 60 * 1000)         // 10 minutes
  async getProduct(@Param('id') id: string) {
    return this.productService.getProduct(id);
  }
}

Apply the interceptor globally (equivalent to app.UseResponseCaching() globally):

// app.module.ts — register the interceptor as a global provider
import { CacheInterceptor } from '@nestjs/cache-manager';
import { APP_INTERCEPTOR } from '@nestjs/core';

// In AppModule providers:
{
  provide: APP_INTERCEPTOR,
  useClass: CacheInterceptor,
}

The CacheInterceptor only caches GET requests and uses the URL as the cache key by default. For more sophisticated key generation (varying by user, query params, or custom headers), extend the interceptor:

// custom-cache.interceptor.ts
import { CacheInterceptor } from '@nestjs/cache-manager';
import { Injectable, ExecutionContext } from '@nestjs/common';

@Injectable()
export class UserAwareCacheInterceptor extends CacheInterceptor {
  // Override the key generation to include the authenticated user ID
  // Equivalent to VaryByHeader or custom IResponseCachePolicy in .NET
  protected trackBy(context: ExecutionContext): string | undefined {
    const request = context.switchToHttp().getRequest();
    const baseKey = super.trackBy(context);

    if (!baseKey) return undefined;

    // Scope cached data per user — prevents user A seeing user B's cached data
    const userId = (request.user as { id: string } | undefined)?.id ?? 'anon';
    return `${baseKey}:${userId}`;
  }
}

Cache Invalidation Strategies

Cache invalidation is where most .NET engineers already know the hard truths. The patterns translate directly:

// invalidation.service.ts
import { Injectable, Inject } from '@nestjs/common';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Cache } from 'cache-manager';
import { InjectRedis } from '@nestjs-modules/ioredis'; // direct ioredis access for pattern / tag operations
import { Redis } from 'ioredis';

@Injectable()
export class CacheInvalidationService {
  constructor(
    @Inject(CACHE_MANAGER) private readonly cache: Cache,
    @InjectRedis() private readonly redis: Redis,
  ) {}

  // Single key invalidation — equivalent to _cache.Remove(key)
  async invalidateProduct(id: string): Promise<void> {
    await this.cache.del(`product:${id}`);
  }

  // Pattern-based invalidation (requires direct Redis access)
  // Equivalent to iterating keys by prefix in .NET IDistributedCache
  async invalidateProductCategory(category: string): Promise<void> {
    // cache-manager does not support pattern-based deletion natively.
    // Use the injected ioredis client: SCAN for keys matching
    // `myapp:product:category:${category}:*`, then DEL them (see Gotcha #3 below).
  }

  // Tag-based invalidation with Redis Sets.
  // This pattern has no direct .NET equivalent but is superior to key pattern scanning:
  // when setting a cache entry, also record its key in a tag set, then invalidate
  // every key for a tag by deleting the set members.
  async setWithTag(key: string, value: unknown, ttlMs: number, tags: string[]): Promise<void> {
    await this.cache.set(key, value, ttlMs);

    // Record key in each tag's set (tags expire slightly after the cache entries)
    for (const tag of tags) {
      await this.redis.sadd(`tag:${tag}`, key);
      await this.redis.expire(`tag:${tag}`, Math.ceil(ttlMs / 1000) + 60);
    }
  }

  async invalidateByTag(tag: string): Promise<void> {
    const keys = await this.redis.smembers(`tag:${tag}`);
    if (keys.length > 0) {
      await Promise.all(keys.map(key => this.cache.del(key)));
      await this.redis.del(`tag:${tag}`);
    }
  }
}
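
Usage of the tag-based pair then looks like this (the key and tag names are illustrative):

// Tag entries when writing them, then invalidate a whole group at once
await this.cacheInvalidation.setWithTag('product:42', product, 10 * 60 * 1000, ['category:shoes']);
await this.cacheInvalidation.setWithTag('products:list:shoes:1', firstPage, 60 * 1000, ['category:shoes']);

// A category-wide change wipes every key registered under the tag
await this.cacheInvalidation.invalidateByTag('category:shoes');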

ETag Support

ETags let the browser cache a response and revalidate it cheaply — the server returns 304 Not Modified if the resource hasn’t changed, with no body. In ASP.NET Core this is built into the framework. In NestJS, you add it manually with middleware:

// middleware/etag.middleware.ts
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';
import * as etag from 'etag';

@Injectable()
export class ETagMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction): void {
    // Intercept response and add ETag header
    const originalSend = res.send.bind(res);

    res.send = (body: unknown): Response => {
      if (req.method === 'GET' && res.statusCode === 200) {
        const bodyStr = typeof body === 'string' ? body : JSON.stringify(body);
        const tag = etag(bodyStr);

        res.setHeader('ETag', tag);
        res.setHeader('Cache-Control', 'private, must-revalidate');

        // Check If-None-Match header — 304 if ETag matches
        if (req.headers['if-none-match'] === tag) {
          res.statusCode = 304;
          return originalSend('');
        }
      }

      return originalSend(body);
    };

    next();
  }
}
// app.module.ts
import { MiddlewareConsumer, NestModule } from '@nestjs/common';
import { ETagMiddleware } from './middleware/etag.middleware';

export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    consumer.apply(ETagMiddleware).forRoutes('*');
  }
}

Client-Side Caching with TanStack Query

This is a layer .NET engineers often underestimate because the server renders everything in server-centric apps. In a React or Vue SPA consuming a NestJS API, TanStack Query (formerly React Query) provides in-browser caching of server state. It reduces the number of API calls, handles loading/error states, and manages cache staleness — without any server-side code.

// hooks/useProduct.ts (React)
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';

// Query key factory — use strings/arrays as stable cache keys
const productKeys = {
  all: ['products'] as const,
  list: (filters: ProductFilters) => ['products', 'list', filters] as const,
  detail: (id: string) => ['products', 'detail', id] as const,
};

// Fetching with automatic caching
export function useProduct(id: string) {
  return useQuery({
    queryKey: productKeys.detail(id),
    queryFn: () => fetch(`/api/products/${id}`).then(r => r.json()),
    staleTime: 5 * 60 * 1000,    // Data considered fresh for 5 minutes — no refetch
    gcTime: 10 * 60 * 1000,      // Keep in cache (unused) for 10 minutes
    retry: 3,                     // Retry failed requests 3 times
  });
}

// Mutation with cache invalidation
export function useUpdateProduct() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (data: UpdateProductDto) =>
      fetch(`/api/products/${data.id}`, {
        method: 'PUT',
        body: JSON.stringify(data),
        headers: { 'Content-Type': 'application/json' },
      }).then(r => r.json()),

    // Invalidate and refetch after update — equivalent to removing the key from IMemoryCache
    onSuccess: (updatedProduct) => {
      // Invalidate the list (may have changed)
      queryClient.invalidateQueries({ queryKey: productKeys.all });

      // Optimistically update the detail cache with the new data
      queryClient.setQueryData(productKeys.detail(updatedProduct.id), updatedProduct);
    },
  });
}

The mental model shift: TanStack Query is not a replacement for server-side caching. It is an additional layer. A request that TanStack Query serves from its cache never reaches the server. A request that does reach the server can still hit the NestJS cache before reaching the database.

CDN and HTTP Cache Headers on Render

For public, non-personalized content, set HTTP cache headers and let a CDN handle it:

// products.controller.ts
import { Controller, Get, Param, Res } from '@nestjs/common';
import { Response } from 'express';

@Controller('products')
export class ProductsController {
  @Get('catalog')
  async getPublicCatalog(@Res({ passthrough: true }) res: Response) {
    // Equivalent to [ResponseCache(Duration = 300, Location = ResponseCacheLocation.Any)]
    res.setHeader('Cache-Control', 'public, max-age=300, s-maxage=3600, stale-while-revalidate=60');
    res.setHeader('Vary', 'Accept-Encoding, Accept-Language');
    return this.productService.getPublicCatalog();
  }

  @Get(':id')
  async getProduct(@Param('id') id: string, @Res({ passthrough: true }) res: Response) {
    // Private — per-user, not cacheable by CDN
    res.setHeader('Cache-Control', 'private, max-age=60, must-revalidate');
    return this.productService.getProduct(id);
  }
}

On Render, static assets and public API responses are cached by Render’s built-in CDN automatically when Cache-Control: public is set.


Key Differences

| Concept | ASP.NET Core | NestJS |
|---|---|---|
| In-memory cache | IMemoryCache | @nestjs/cache-manager (in-memory store) |
| Distributed cache | IDistributedCache + .AddStackExchangeRedisCache() | @nestjs/cache-manager + @keyv/redis store |
| Cache interface | IMemoryCache.TryGetValue / IDistributedCache.GetAsync | cache.get<T>(key) / cache.set(key, value, ttl) |
| Response caching | [ResponseCache] attribute | @UseInterceptors(CacheInterceptor) |
| HTTP cache middleware | app.UseResponseCaching() | Custom middleware or Express express-cache-controller |
| Serialization | Manual (JsonSerializer) for IDistributedCache | Automatic (cache-manager serializes internally) |
| Key prefix / namespace | InstanceName in options | namespace in Keyv store config |
| Cache TTL | AbsoluteExpirationRelativeToNow | ttl in milliseconds (v5+) |
| Sliding expiration | SlidingExpiration | Not natively in cache-manager — use Redis TTL refresh manually |
| Pattern invalidation | SCAN + DEL in StackExchange.Redis | Direct ioredis required — not in cache-manager abstraction |
| ETag support | Built-in via UseResponseCaching | Manual middleware or express-etag |
| CDN caching | Cache-Control headers + Azure CDN | Cache-Control headers + Render CDN / Cloudflare |
| Client-side cache | Not in scope for server code | TanStack Query (browser) — major new layer |
| Output caching | [OutputCache] (.NET 7+) | CacheInterceptor (similar) |

Gotchas for .NET Engineers

1. cache-manager v5 changed TTL units from seconds to milliseconds

If you find examples online showing ttl: 60 and wonder why your cache expires in 60 milliseconds rather than 60 seconds, you have hit the breaking change in cache-manager v5. In v4 and earlier, TTL was in seconds. In v5, TTL is in milliseconds. This change affects every cache.set() call and the default TTL in CacheModule.register().

// Wrong — using seconds (cache-manager v4 style) — data expires in 0.01 seconds
await this.cache.set('key', value, 10);

// Correct — using milliseconds (cache-manager v5+)
await this.cache.set('key', value, 10 * 1000); // 10 seconds

Check your cache-manager version and read the CHANGELOG before copying examples. Most StackOverflow answers and many blog posts still show the v4 API.

2. cache.get() returns undefined for a cache miss, not null

In .NET, IDistributedCache.GetAsync() returns null when a key is not found. In cache-manager, cache.get() returns undefined. This means a simple if (cached) check will incorrectly treat a cached value of false, 0, or empty string as a cache miss.

// Wrong — treats falsy cached values as cache misses
const cached = await this.cache.get<number>('score');
if (cached) { // Fails when cached score is 0
  return cached;
}

// Correct — check for undefined explicitly
const cached = await this.cache.get<number>('score');
if (cached !== undefined) {
  return cached;
}

This distinction also matters if you intentionally cache null to record a “known non-existent” entry (the cache-aside null-caching pattern). cache-manager will store null as a valid value — the check must be !== undefined.
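
A sketch of that null-caching variant of cache-aside (the key layout matches the earlier ProductService example; the 1-minute negative TTL is an arbitrary choice):

// TypeScript — cache-aside with negative caching ("known not found" entries)
async getProduct(id: string): Promise<Product | null> {
  const key = `product:${id}`;

  const cached = await this.cache.get<Product | null>(key);
  if (cached !== undefined) {
    return cached; // may legitimately be null: a cached "does not exist"
  }

  const product = await this.repo.findById(id);

  // Cache misses with a shorter TTL so newly created products become visible quickly
  await this.cache.set(key, product ?? null, product ? 10 * 60 * 1000 : 60 * 1000);

  return product ?? null;
}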

3. Pattern-based cache invalidation requires bypassing the cache-manager abstraction

IDistributedCache in .NET doesn’t support pattern deletion either — you’d use StackExchange.Redis.IDatabase directly for KEYS or SCAN. The same is true in NestJS: cache-manager only provides get, set, del, and reset. For “delete all keys matching product:*”, you need a direct ioredis client.

// Inject ioredis alongside cache-manager for pattern operations
import { Redis } from 'ioredis';

@Injectable()
export class CacheService {
  constructor(
    @Inject(CACHE_MANAGER) private readonly cache: Cache,
    @Inject('REDIS_CLIENT') private readonly redis: Redis,
  ) {}

  async invalidateByPrefix(prefix: string): Promise<void> {
    // SCAN is safer than KEYS in production — non-blocking
    let cursor = '0';
    const keysToDelete: string[] = [];

    do {
      const [nextCursor, keys] = await this.redis.scan(
        cursor,
        'MATCH', `myapp:${prefix}:*`,
        'COUNT', 100,
      );
      cursor = nextCursor;
      keysToDelete.push(...keys);
    } while (cursor !== '0');

    if (keysToDelete.length > 0) {
      await this.redis.del(...keysToDelete);
    }
  }
}

Never use KEYS * in production — it is a blocking O(N) operation that will pause Redis for the duration. Always use SCAN.

4. CacheInterceptor caches based on the request URL — not on the response content

If two requests hit the same URL but receive different responses (because the handler has side effects, or the database changed between requests), CacheInterceptor returns the same cached response for both. This is correct behavior — but it means you must not apply CacheInterceptor to endpoints whose responses depend on:

  • The authenticated user’s identity (unless you override trackBy() to include the user ID in the key)
  • Request headers like Accept-Language or Authorization
  • Write operations (POST, PUT, DELETE) — the interceptor already skips non-GET requests, but be aware

The trackBy() method is your VaryByHeader / VaryByQueryKeys equivalent. Override it whenever the response varies by something other than the URL.

5. TanStack Query staleTime and gcTime are not the same thing — and developers confuse them constantly

Coming from server-side caching, you think of one TTL. TanStack Query has two:

  • staleTime: How long data is considered fresh. During this period, the hook returns cached data without any background refetch — equivalent to max-age in HTTP caching.
  • gcTime (formerly cacheTime): How long unused data stays in the in-memory cache before being garbage collected — equivalent to s-maxage or the IMemoryCache TTL after the component unmounts.
useQuery({
  queryKey: ['products'],
  queryFn: fetchProducts,
  staleTime: 5 * 60 * 1000,  // 5 min: no network request if data is this fresh
  gcTime: 10 * 60 * 1000,    // 10 min: keep in memory even after component unmounts
});

// What this means in practice:
// - 0 to 5 min after last fetch: returns cached data, no network request
// - After 5 min (data is stale): returns cached data BUT triggers background refetch
// - After component unmounts: data stays in cache for 10 min in case it's needed again
// - After 10 min unmounted: data is garbage collected

The default staleTime is 0 — meaning data is stale immediately, so every new mount of a component using the query (and every window refocus or reconnect) triggers a background refetch. For data that doesn’t change often (product catalog, user profile), set a meaningful staleTime.
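
If most of your data tolerates the same freshness window, set the defaults once on the QueryClient instead of per hook (the values here are illustrative):

// TypeScript — app-wide TanStack Query defaults
import { QueryClient } from '@tanstack/react-query';

export const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 60 * 1000,     // treat data as fresh for 1 minute by default
      gcTime: 10 * 60 * 1000,   // keep unused data in memory for 10 minutes
      retry: 2,
    },
  },
});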

6. In-memory cache is per-instance and is lost on restart

This is the same limitation as IMemoryCache in .NET — but it catches Node.js engineers by surprise more often because Node.js processes restart more frequently (PM2, container restarts, Render re-deploys). Every restart starts from a cold cache that must be warmed again.

Design for this: warm critical caches on startup using a @Timeout() job (see Article 4.6), and prefer Redis for any data where cache misses under load are a problem. Use in-memory only as an L1 in front of Redis, or for data that is genuinely cheap to recompute.
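
A minimal warm-up sketch using @nestjs/schedule (ProductRepository and findTopViewed are assumed names from this chapter's examples, not a library API):

// TypeScript — warm a critical cache once, right after startup
import { Injectable, Inject } from '@nestjs/common';
import { Timeout } from '@nestjs/schedule';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Cache } from 'cache-manager';
import { ProductRepository } from './product.repository'; // assumed repository from earlier examples

@Injectable()
export class CacheWarmupService {
  constructor(
    @Inject(CACHE_MANAGER) private readonly cache: Cache,
    private readonly repo: ProductRepository,
  ) {}

  @Timeout(0) // fires once, immediately after the application has bootstrapped
  async warmPopularProducts(): Promise<void> {
    const products = await this.repo.findTopViewed(20); // assumed query
    await Promise.all(
      products.map((product) => this.cache.set(`product:${product.id}`, product, 10 * 60 * 1000)),
    );
  }
}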


Hands-On Exercise

Build a layered caching system for a product catalog API.

Requirements:

  1. Set up @nestjs/cache-manager with a two-tier configuration: 30-second in-memory L1 and 10-minute Redis L2.

  2. In ProductService, implement the cache-aside pattern for:

    • getProduct(id) — cache individual products for 10 minutes, cache the null result (key not found) for 1 minute to prevent thundering herd on invalid IDs
    • listProducts(category, page) — cache paginated lists for 60 seconds, keyed by products:list:${category}:${page}
  3. Create a ProductCacheService that handles tag-based invalidation:

    • When a product is updated, invalidate product:${id} and all products:list:* keys matching the product’s category
    • Use SCAN rather than KEYS for pattern deletion
  4. Apply CacheInterceptor to the GET /products/catalog endpoint (public, no auth), with a custom trackBy() that includes the Accept-Language header in the cache key for multi-language support.

  5. Add ETag support to GET /products/:id so that clients receive a 304 when the product hasn’t changed. Use a hash of updatedAt as the ETag value.

  6. Create a React hook useProductList(category: string, page: number) using TanStack Query with:

    • staleTime: 60 * 1000 (match the server cache TTL)
    • gcTime: 5 * 60 * 1000
    • Proper query key using the factory pattern

Stretch goal: Implement a cache warming job using @Timeout(0) that, on startup, loads the 20 most-viewed products from the database and pre-populates the Redis cache. Add a BullMQ job to refresh the top-20 list every hour.


Quick Reference

| Task | ASP.NET Core | NestJS |
|---|---|---|
| Register in-memory cache | AddMemoryCache() | CacheModule.register({ max: 1000 }) |
| Register Redis cache | AddStackExchangeRedisCache(...) | CacheModule.register({ stores: [createKeyv(url)] }) |
| Get from cache | _cache.TryGetValue(key, out T value) | await cache.get<T>(key) (returns undefined on miss) |
| Set in cache | _cache.Set(key, value, options) | await cache.set(key, value, ttlMs) |
| Delete from cache | _cache.Remove(key) | await cache.del(key) |
| Clear all | (no built-in for IMemoryCache) | await cache.clear() |
| Inject cache | IMemoryCache or IDistributedCache | @Inject(CACHE_MANAGER) private cache: Cache |
| HTTP response caching | [ResponseCache(Duration = 60)] | @UseInterceptors(CacheInterceptor) + @CacheTTL(60000) |
| Vary by user | VaryByHeader = "Authorization" | Override trackBy() in custom CacheInterceptor |
| Global interceptor | app.UseResponseCaching() | { provide: APP_INTERCEPTOR, useClass: CacheInterceptor } |
| TTL unit | Seconds (TimeSpan) | Milliseconds (cache-manager v5+) |
| Cache miss value | null | undefined |
| Serialization | Manual (JsonSerializer) | Automatic (cache-manager handles it) |
| Key prefix | InstanceName option | namespace in Keyv store |
| Pattern invalidation | IDatabase.KeyDeleteAsync(pattern) | Direct ioredis SCAN + DEL |
| ETag | Built-in (UseResponseCaching) | Manual middleware or etag npm package |
| Client-side caching | Not applicable | TanStack Query (useQuery, staleTime, gcTime) |
| CDN / public cache | Cache-Control: public, max-age=300 | Same header via res.setHeader() |

Cache-Control header cheat sheet:

// Private, short-lived (user-specific data)
res.setHeader('Cache-Control', 'private, max-age=60, must-revalidate');

// Public, CDN-cacheable (catalog, static content)
res.setHeader('Cache-Control', 'public, max-age=300, s-maxage=3600, stale-while-revalidate=60');

// Never cache (sensitive data, real-time)
res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate');

// Cache, but revalidate with the server before each use
res.setHeader('Cache-Control', 'no-cache'); // Always revalidate with server

TanStack Query decision guide:

| Data type | staleTime | gcTime | Notes |
|---|---|---|---|
| User profile (infrequent changes) | 5 min | 10 min | Invalidate on update mutation |
| Product catalog (changes daily) | 5 min | 10 min | Background refetch acceptable |
| Shopping cart (session-bound) | 0 (always fresh) | 5 min | User actions mutate frequently |
| Real-time data (prices, stock) | 0 | 1 min | Refetch on window focus |
| Static reference data (countries) | 1 hour | 24 hours | Rarely changes |

Further Reading

4.8 — File Uploads & Storage: Azure Blob to S3/Cloudflare R2

For .NET engineers who know: IFormFile, Azure.Storage.Blobs, streaming uploads to Azure Blob Storage
You’ll learn: How to handle file uploads in NestJS with Multer, why pre-signed URLs are the preferred architecture, and how to use Cloudflare R2 as an S3-compatible alternative
Time: 10-15 minutes


The .NET Way (What You Already Know)

In ASP.NET Core, file uploads arrive through IFormFile. You bind it in a controller action, validate the content type and size, and push the stream to Azure Blob Storage using the Azure.Storage.Blobs SDK:

// C# — ASP.NET Core file upload to Azure Blob Storage
[ApiController]
[Route("api/uploads")]
public class UploadsController : ControllerBase
{
    private readonly BlobServiceClient _blobServiceClient;

    public UploadsController(BlobServiceClient blobServiceClient)
    {
        _blobServiceClient = blobServiceClient;
    }

    [HttpPost]
    [RequestSizeLimit(10_485_760)] // 10 MB
    public async Task<IActionResult> Upload(IFormFile file, CancellationToken ct)
    {
        if (file.Length == 0)
            return BadRequest("Empty file");

        var allowed = new[] { "image/jpeg", "image/png", "image/webp" };
        if (!allowed.Contains(file.ContentType))
            return BadRequest("Unsupported file type");

        var container = _blobServiceClient.GetBlobContainerClient("uploads");
        var blobName = $"{Guid.NewGuid()}{Path.GetExtension(file.FileName)}";
        var blob = container.GetBlobClient(blobName);

        await blob.UploadAsync(file.OpenReadStream(), new BlobHttpHeaders
        {
            ContentType = file.ContentType
        }, cancellationToken: ct);

        return Ok(new { url = blob.Uri });
    }
}

This pattern — browser sends file to your API, API streams it to blob storage — is straightforward but has a scaling problem: every upload consumes a thread and network bandwidth on your API server. For high-volume uploads, you switch to SAS tokens (Shared Access Signatures) so clients upload directly to Blob Storage, bypassing your server entirely.

// C# — Generate a SAS URL for direct client upload
[HttpPost("presign")]
public IActionResult GenerateSasUrl([FromBody] PresignRequest request)
{
    var container = _blobServiceClient.GetBlobContainerClient("uploads");
    var blobName = $"{Guid.NewGuid()}{request.Extension}";
    var blob = container.GetBlobClient(blobName);

    var sasUri = blob.GenerateSasUri(BlobSasPermissions.Write, DateTimeOffset.UtcNow.AddMinutes(15));
    return Ok(new { uploadUrl = sasUri, blobName });
}

The Node.js ecosystem uses the same two patterns but with different libraries and terminology. “SAS tokens” become “pre-signed URLs,” and “Azure Blob Storage” is most commonly replaced by Amazon S3 or Cloudflare R2.


The Node.js Way

Multer: The IFormFile Equivalent

Multer is the standard Express/NestJS middleware for handling multipart/form-data — it is the Node.js equivalent of ASP.NET Core’s built-in IFormFile binding. NestJS ships with a FileInterceptor decorator that wraps Multer.

pnpm add multer
pnpm add -D @types/multer
// TypeScript — NestJS file upload with FileInterceptor
// src/uploads/uploads.controller.ts
import {
    Controller,
    Post,
    UploadedFile,
    UseInterceptors,
    BadRequestException,
    ParseFilePipe,
    MaxFileSizeValidator,
    FileTypeValidator,
} from "@nestjs/common";
import { FileInterceptor } from "@nestjs/platform-express";
import { Express } from "express";
import { UploadsService } from "./uploads.service";

@Controller("uploads")
export class UploadsController {
    constructor(private readonly uploadsService: UploadsService) {}

    @Post()
    @UseInterceptors(FileInterceptor("file"))
    async upload(
        @UploadedFile(
            new ParseFilePipe({
                validators: [
                    new MaxFileSizeValidator({ maxSize: 10 * 1024 * 1024 }), // 10 MB
                    new FileTypeValidator({ fileType: /image\/(jpeg|png|webp)/ }),
                ],
            }),
        )
        file: Express.Multer.File,
    ) {
        const url = await this.uploadsService.uploadToStorage(file);
        return { url };
    }
}

FileInterceptor("file") reads the field name from the multipart form — equivalent to the parameter name in IFormFile file. ParseFilePipe applies validators before your handler runs, similar to model validation with [Required] and custom [FileExtensions] attributes in C#.

By default, Multer buffers files in memory. For large files, configure disk storage:

// TypeScript — Multer disk storage configuration
import { diskStorage } from "multer";
import { extname } from "path";

const multerDiskConfig = {
    storage: diskStorage({
        destination: "/tmp/uploads",
        filename: (_req, file, callback) => {
            const uniqueName = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
            callback(null, `${uniqueName}${extname(file.originalname)}`);
        },
    }),
};

@UseInterceptors(FileInterceptor("file", multerDiskConfig))

For the NestJS module, register MulterModule to set global defaults:

// TypeScript — uploads.module.ts
import { Module } from "@nestjs/common";
import { MulterModule } from "@nestjs/platform-express";
import { UploadsController } from "./uploads.controller";
import { UploadsService } from "./uploads.service";

@Module({
    imports: [
        MulterModule.register({
            limits: {
                fileSize: 10 * 1024 * 1024, // 10 MB
                files: 5,
            },
        }),
    ],
    controllers: [UploadsController],
    providers: [UploadsService],
})
export class UploadsModule {}

Cloudflare R2: S3-Compatible Storage

Cloudflare R2 is the recommended object storage for this stack. It is S3-compatible (same API surface, same SDK) but has no egress fees and is significantly cheaper at scale. You use the official AWS SDK @aws-sdk/client-s3 — Cloudflare R2 accepts the same requests.

pnpm add @aws-sdk/client-s3 @aws-sdk/s3-request-presigner

Configure the S3 client to point at your R2 endpoint:

// TypeScript — R2 client configuration
// src/storage/r2.client.ts
import { S3Client } from "@aws-sdk/client-s3";

export function createR2Client(): S3Client {
    return new S3Client({
        region: "auto",
        endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
        credentials: {
            accessKeyId: process.env.R2_ACCESS_KEY_ID!,
            secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
        },
    });
}

The service that performs the actual upload:

// TypeScript — uploads.service.ts (server-side streaming to R2)
import { Injectable } from "@nestjs/common";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { createR2Client } from "../storage/r2.client";
import { Express } from "express";
import { randomUUID } from "crypto";
import { extname } from "path";

@Injectable()
export class UploadsService {
    private readonly s3: S3Client = createR2Client();
    private readonly bucket = process.env.R2_BUCKET_NAME!;
    private readonly publicUrl = process.env.R2_PUBLIC_URL!; // e.g. https://cdn.example.com

    async uploadToStorage(file: Express.Multer.File): Promise<string> {
        const key = `uploads/${randomUUID()}${extname(file.originalname)}`;

        await this.s3.send(
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                Body: file.buffer,
                ContentType: file.mimetype,
                ContentLength: file.size,
            }),
        );

        return `${this.publicUrl}/${key}`;
    }
}

Pre-Signed URLs: The Preferred Architecture

Routing large uploads through your API server is wasteful. The preferred pattern — equivalent to Azure SAS tokens — is pre-signed URLs: your server generates a short-lived signed URL, hands it to the client, and the client uploads directly to R2. Your API server never touches the file bytes.

sequenceDiagram
    participant B as Browser
    participant A as API Server
    participant R as Cloudflare R2

    B->>A: POST /uploads/presign { contentType, size }
    A->>R: generatePresignedUrl()
    R-->>A: { uploadUrl, key }
    A-->>B: { uploadUrl, key }
    B->>R: PUT uploadUrl (file bytes, directly to R2)
    R-->>B: 200 OK
    B->>A: POST /uploads/confirm { key }
    Note over A: record in DB, process...
    A-->>B: { finalUrl }
// TypeScript — Pre-signed URL generation
// src/uploads/uploads.service.ts (add to existing service)
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand } from "@aws-sdk/client-s3";

interface PresignRequest {
    contentType: string;
    sizeBytes: number;
    extension: string;
}

interface PresignResponse {
    uploadUrl: string;
    key: string;
    expiresAt: Date;
}

const ALLOWED_TYPES = new Set(["image/jpeg", "image/png", "image/webp", "application/pdf"]);
const MAX_SIZE_BYTES = 50 * 1024 * 1024; // 50 MB

@Injectable()
export class UploadsService {
    // ...existing code...

    async createPresignedUrl(request: PresignRequest): Promise<PresignResponse> {
        if (!ALLOWED_TYPES.has(request.contentType)) {
            throw new BadRequestException(`Content type ${request.contentType} is not allowed`);
        }

        if (request.sizeBytes > MAX_SIZE_BYTES) {
            throw new BadRequestException(
                `File size ${request.sizeBytes} exceeds maximum of ${MAX_SIZE_BYTES}`,
            );
        }

        const key = `uploads/${randomUUID()}.${request.extension.replace(/^\./, "")}`;
        const expiresIn = 900; // 15 minutes

        const uploadUrl = await getSignedUrl(
            this.s3,
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                ContentType: request.contentType,
                ContentLength: request.sizeBytes,
            }),
            { expiresIn },
        );

        return {
            uploadUrl,
            key,
            expiresAt: new Date(Date.now() + expiresIn * 1000),
        };
    }
}
// TypeScript — Controller endpoint for pre-signed URL
import { Body, Post } from "@nestjs/common";

@Post("presign")
async presign(
    @Body() body: { contentType: string; sizeBytes: number; extension: string },
) {
    return this.uploadsService.createPresignedUrl(body);
}

@Post("confirm")
async confirm(@Body() body: { key: string }) {
    // Validate the object actually exists in R2, then save to DB
    return this.uploadsService.confirmUpload(body.key);
}

On the frontend, the client-side upload flow:

// TypeScript — Frontend upload using pre-signed URL
// src/lib/upload.ts
interface UploadResult {
    key: string;
    publicUrl: string;
}

export async function uploadFile(file: File): Promise<UploadResult> {
    // Step 1: Get pre-signed URL from your API
    const presignResponse = await fetch("/api/uploads/presign", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            contentType: file.type,
            sizeBytes: file.size,
            extension: file.name.split(".").pop() ?? "bin",
        }),
    });

    if (!presignResponse.ok) {
        throw new Error("Failed to get upload URL");
    }

    const { uploadUrl, key } = await presignResponse.json();

    // Step 2: PUT directly to R2 — your API server is not involved
    const uploadResponse = await fetch(uploadUrl, {
        method: "PUT",
        body: file,
        headers: { "Content-Type": file.type },
    });

    if (!uploadResponse.ok) {
        throw new Error("Upload to storage failed");
    }

    // Step 3: Confirm with your API so it can record the upload
    const confirmResponse = await fetch("/api/uploads/confirm", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ key }),
    });

    return confirmResponse.json();
}

Image Processing with Sharp

Sharp is the standard Node.js library for image processing — resizing, format conversion, compression. The C# equivalent is ImageSharp (from SixLabors) or the older System.Drawing. Sharp wraps libvips, which is substantially faster than the GDI+ backend that System.Drawing uses.

pnpm add sharp  # sharp ships its own TypeScript type definitions
// TypeScript — Image processing before upload to R2 (add to UploadsService)
import sharp from "sharp";

async processAndUpload(
    file: Express.Multer.File,
    variants: Array<{ width: number; suffix: string }>,
): Promise<Record<string, string>> {
    const results: Record<string, string> = {};
    const baseKey = `images/${randomUUID()}`;

    for (const variant of variants) {
        // Resize and convert to WebP — generally 30-50% smaller than JPEG
        const processed = await sharp(file.buffer)
            .resize(variant.width, null, {
                withoutEnlargement: true,
                fit: "inside",
            })
            .webp({ quality: 85 })
            .toBuffer();

        const key = `${baseKey}-${variant.suffix}.webp`;

        await this.s3.send(
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                Body: processed,
                ContentType: "image/webp",
            }),
        );

        results[variant.suffix] = `${this.publicUrl}/${key}`;
    }

    return results;
}

// Usage — generate thumbnail, medium, and full-size variants
const urls = await uploadsService.processAndUpload(file, [
    { width: 200, suffix: "thumb" },
    { width: 800, suffix: "medium" },
    { width: 1920, suffix: "full" },
]);

Key Differences

| Concept | C# / Azure | Node.js / Cloudflare R2 |
|---|---|---|
| Multipart form binding | IFormFile built into ASP.NET | Multer middleware (FileInterceptor in NestJS) |
| File validation | [FileExtensions], [MaxFileSize] attributes | ParseFilePipe with MaxFileSizeValidator, FileTypeValidator |
| Blob storage SDK | Azure.Storage.Blobs (Azure-specific) | @aws-sdk/client-s3 (works with R2, S3, MinIO, etc.) |
| Direct client upload | Azure SAS tokens | S3 pre-signed URLs — same concept, different name |
| Pre-signed URL lib | BlobClient.GenerateSasUri() | @aws-sdk/s3-request-presigner getSignedUrl() |
| Image processing | SixLabors.ImageSharp | sharp (wraps libvips, very fast) |
| Storage cost model | Pay for egress | R2 has no egress fees — significant savings at scale |
| Content validation | MIME type from IFormFile.ContentType | file.mimetype from Multer — also check magic bytes for security |
| Streaming large files | OpenReadStream() on IFormFile | Multer memoryStorage (buffer) or diskStorage (temp file) |

Gotchas for .NET Engineers

Gotcha 1: MIME Type Validation is Not Enough — Check Magic Bytes

FileTypeValidator in NestJS and the ContentType field in IFormFile both rely on what the client reports. A malicious client can rename exploit.exe to photo.jpg and send image/jpeg as the content type. The only reliable validation is checking the file’s magic bytes — the first few bytes that identify the actual format.

Use the file-type package for this:

pnpm add file-type
// TypeScript — magic byte validation in the upload service
import { fromBuffer } from "file-type"; // file-type v16 API; newer ESM-only releases export fileTypeFromBuffer instead

async function validateMagicBytes(buffer: Buffer, declaredMimeType: string): Promise<void> {
    const detected = await fromBuffer(buffer);

    if (!detected) {
        throw new BadRequestException("Could not determine file type from content");
    }

    const allowed = new Set(["image/jpeg", "image/png", "image/webp"]);

    if (!allowed.has(detected.mime)) {
        throw new BadRequestException(`File content is ${detected.mime}, not an allowed image type`);
    }

    if (detected.mime !== declaredMimeType) {
        throw new BadRequestException(
            `Declared type ${declaredMimeType} does not match actual content ${detected.mime}`,
        );
    }
}

In .NET, the System.Net.Mime.ContentType class parses MIME types but does not inspect content. The same vulnerability exists. This is not a Node.js-specific problem, but it is frequently forgotten in both ecosystems.

Gotcha 2: Multer Memory Storage Will OOM Your Server on Large Files

The default Multer configuration in NestJS uses memoryStorage, which buffers the entire file in memory before your handler runs. If five users simultaneously upload 50 MB files, you have 250 MB of file data sitting in RAM on top of your application’s normal memory footprint. Under concurrent load, this causes out-of-memory crashes.

For files larger than about 5 MB, use disk storage or, better yet, the pre-signed URL pattern so files never hit your server at all:

// TypeScript — always set explicit limits for in-memory uploads
import { memoryStorage } from "multer";

MulterModule.register({
    storage: memoryStorage(),
    limits: {
        fileSize: 5 * 1024 * 1024, // Enforce a hard limit — 5 MB for in-memory
        files: 1,
    },
}),

If you need server-side processing of larger files (Sharp pipelines, virus scanning), use disk storage and stream the file from disk rather than loading it into memory.
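
A sketch of that disk-to-R2 streaming path, assuming the diskStorage configuration from earlier so the handler receives file.path instead of file.buffer (intended as a method on UploadsService):

// TypeScript — stream a disk-stored upload to R2 without buffering it in RAM
import { createReadStream } from "node:fs";
import { stat, unlink } from "node:fs/promises";
import { PutObjectCommand } from "@aws-sdk/client-s3";

async uploadFromDisk(file: Express.Multer.File): Promise<string> {
    const key = `uploads/${randomUUID()}${extname(file.originalname)}`;
    const { size } = await stat(file.path);

    await this.s3.send(
        new PutObjectCommand({
            Bucket: this.bucket,
            Key: key,
            Body: createReadStream(file.path), // streamed, never fully in memory
            ContentLength: size,               // needed when Body is a stream
            ContentType: file.mimetype,
        }),
    );

    await unlink(file.path); // clean up the temp file
    return `${this.publicUrl}/${key}`;
}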

Gotcha 3: Pre-Signed URLs Require CORS Configuration on Your Bucket

When a browser makes a PUT request directly to Cloudflare R2 using a pre-signed URL, the browser sends a CORS preflight request (OPTIONS). If your R2 bucket does not have a CORS policy allowing PUT from your domain, every upload will fail with a CORS error in the browser — and the error will look like a network failure, not a configuration problem.

Configure CORS on your R2 bucket via the Cloudflare dashboard or Wrangler:

// R2 CORS policy (JSON format in Cloudflare dashboard)
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT", "GET", "HEAD"],
    "AllowedHeaders": ["Content-Type", "Content-Length"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]

In Azure, the equivalent is the CORS configuration on the Storage Account. This is easy to overlook in development (where you may use wildcard origins) and painful to debug in production.

Gotcha 4: Sharp is a Native Module — CI and Docker Require Architecture Matching

Sharp bundles precompiled libvips binaries for specific CPU architectures and Node.js versions. If you build your Docker image on an Apple Silicon Mac (arm64) and deploy to a Linux amd64 server, Sharp will fail at startup with a cryptic binary compatibility error.

Always build Docker images for the target architecture:

# Dockerfile — specify platform explicitly
FROM --platform=linux/amd64 node:22-alpine AS base

Or in your build command:

docker build --platform linux/amd64 -t my-app .

In CI (GitHub Actions), runs-on: ubuntu-latest gives build jobs that include Sharp a linux/amd64 environment. AWS Lambda publishes its architectures (x86_64 or arm64), so you build for the one you chose; Cloudflare Workers cannot load native addons like Sharp at all. The usual failure is the local-to-cloud gap of building on one architecture and deploying to another.

Gotcha 5: The Confirm Step Is Not Optional

With pre-signed URL uploads, there is a window between “client received a presigned URL” and “upload completed.” If you skip the /confirm endpoint and instead assume the upload happened because you generated the URL, you will have dangling references in your database pointing to keys that were never uploaded — either because the upload failed, or because the user closed the tab, or because a malicious client never uploaded at all.

The confirm step should (a sketch follows the list):

  1. Verify the object actually exists in R2 (HeadObjectCommand)
  2. Validate that the key matches what was issued in the presign step (prevent key substitution)
  3. Record the upload in your database and return the canonical public URL
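
A minimal confirmUpload sketch covering those three steps (intended as a method on UploadsService; the persistence call is assumed):

// TypeScript — confirmUpload sketch
import { HeadObjectCommand } from "@aws-sdk/client-s3";

async confirmUpload(key: string): Promise<string> {
    // 1. Only accept keys shaped like the ones createPresignedUrl issues
    if (!key.startsWith("uploads/")) {
        throw new BadRequestException("Unrecognized upload key");
    }

    // 2. Verify the object really exists in R2
    try {
        await this.s3.send(new HeadObjectCommand({ Bucket: this.bucket, Key: key }));
    } catch {
        throw new BadRequestException("No object found for this key; the upload may have failed");
    }

    // 3. Record the upload (assumed persistence call) and return the canonical public URL
    // await this.uploadsRepository.create({ key });
    return `${this.publicUrl}/${key}`;
}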

Hands-On Exercise

Build a profile picture upload flow using pre-signed URLs and Sharp for image processing. This exercise covers the full lifecycle: presign, upload, confirm, and serve.

Prerequisites: A running NestJS project with a UsersModule and Cloudflare R2 credentials in your .env.

Step 1 — Environment setup

Add to .env:

R2_ACCOUNT_ID=your_account_id
R2_ACCESS_KEY_ID=your_access_key
R2_SECRET_ACCESS_KEY=your_secret_key
R2_BUCKET_NAME=your-bucket
R2_PUBLIC_URL=https://cdn.example.com

Step 2 — Create the UploadsModule

// src/uploads/uploads.module.ts
import { Module } from "@nestjs/common";
import { UploadsController } from "./uploads.controller";
import { UploadsService } from "./uploads.service";

@Module({
    controllers: [UploadsController],
    providers: [UploadsService],
    exports: [UploadsService],
})
export class UploadsModule {}

Step 3 — Implement the full service

Implement UploadsService with three methods:

  • createPresignedUrl(contentType, sizeBytes, extension) — validates input, returns a signed PUT URL
  • processAvatarUpload(key) — downloads the raw upload from R2, runs it through Sharp (200x200 WebP thumbnail), re-uploads the processed version, returns the public URL
  • confirmUpload(rawKey) — verifies the object exists, triggers processAvatarUpload, returns the final URL

Step 4 — Add magic byte validation

Magic byte validation cannot happen in createPresignedUrl: at that point the client has only sent metadata, not the file. Add the check in the confirm step instead. Download the first 16 bytes of the raw upload using GetObjectCommand with a Range: bytes=0-15 header and validate them as shown in the Gotchas section.

Step 5 — Wire up the confirm endpoint

The confirm endpoint should call confirmUpload, get the processed URL, and store it in the user’s profile record via the UsersService.

Step 6 — Test the full flow

# Step 1: Get presigned URL
curl -X POST http://localhost:3000/uploads/presign \
  -H "Content-Type: application/json" \
  -d '{"contentType":"image/jpeg","sizeBytes":204800,"extension":"jpg"}'

# Step 2: Upload directly to R2 using the URL from step 1
curl -X PUT "PRESIGNED_URL_HERE" \
  -H "Content-Type: image/jpeg" \
  --data-binary @test-photo.jpg

# Step 3: Confirm the upload
curl -X POST http://localhost:3000/uploads/confirm \
  -H "Content-Type: application/json" \
  -d '{"key":"uploads/abc123.jpg"}'

Verify you get back a processed WebP URL pointing to a 200x200 thumbnail.


Quick Reference

| .NET / Azure Concept | Node.js / R2 Equivalent | Notes |
|---|---|---|
| IFormFile | Express.Multer.File | Multer provides the same metadata |
| IFormFile.ContentType | file.mimetype | Don’t trust either — validate magic bytes |
| IFormFile.Length | file.size | Same semantics |
| IFormFile.OpenReadStream() | file.buffer (memoryStorage) or file.path (diskStorage) | Choose based on file size |
| [RequestSizeLimit] attribute | MulterModule limits.fileSize | Set at module level or interceptor |
| Azure.Storage.Blobs.BlobClient | @aws-sdk/client-s3 S3Client | R2 uses the same S3 SDK |
| BlobClient.UploadAsync(stream) | PutObjectCommand | Pass Body: buffer or a Node.js Readable |
| BlobClient.GenerateSasUri() | getSignedUrl() from @aws-sdk/s3-request-presigner | Same 15-minute expiry is common |
| Azure CORS on Storage Account | R2 CORS policy in Cloudflare dashboard | Required for direct browser uploads |
| SixLabors.ImageSharp | sharp | Sharp wraps libvips — fast, native binary |
| image.Resize(width, height) | sharp(buffer).resize(width, height).toBuffer() | Sharp supports fit modes, aspect ratio preservation |
| image.SaveAsWebpAsync() | .webp({ quality: 85 }) | WebP is the default modern format |
| Azure CDN serving blobs | R2 custom domain / Cloudflare CDN | R2 has zero egress fees to Cloudflare CDN |

Common Multer storage choices:

| Scenario | Storage | Reason |
|---|---|---|
| Files < 5 MB, processed in-memory | memoryStorage() | Simplest; no temp file cleanup |
| Files > 5 MB, server-side processing | diskStorage() | Avoids RAM pressure |
| Files any size, no server processing | Pre-signed URL | Preferred; API never touches bytes |
| Virus scanning required | diskStorage() + scan | Must land on disk for most scanners |

Further Reading

4.9 — Logging & Observability: Serilog to Pino + Sentry

For .NET engineers who know: Serilog, ILogger<T>, structured logging enrichers, Application Insights
You’ll learn: How Pino provides structured logging in NestJS, how request ID correlation works in Node.js, and how Sentry covers the error tracking and performance monitoring role that Application Insights plays in .NET
Time: 15-20 minutes


The .NET Way (What You Already Know)

Serilog is the standard structured logger for .NET. You inject ILogger<T> everywhere, configure sinks and enrichers once at startup, and emit structured log events that carry properties beyond the message string.

// C# — Serilog configuration in Program.cs
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .Enrich.WithCorrelationId()           // from Serilog.Enrichers.CorrelationId
    .Enrich.WithMachineName()
    .Enrich.WithEnvironmentName()
    .WriteTo.Console(new JsonFormatter())  // structured JSON to stdout
    .WriteTo.ApplicationInsights(          // push to Azure
        connectionString,
        TelemetryConverter.Traces)
    .CreateLogger();

builder.Host.UseSerilog();
// C# — usage throughout the application
public class OrdersService
{
    private readonly ILogger<OrdersService> _logger;

    public OrdersService(ILogger<OrdersService> logger)
    {
        _logger = logger;
    }

    public async Task<Order> PlaceOrderAsync(PlaceOrderCommand command)
    {
        _logger.LogInformation(
            "Placing order for customer {CustomerId} with {ItemCount} items",
            command.CustomerId,
            command.Items.Count);

        // ... business logic ...

        _logger.LogInformation(
            "Order {OrderId} placed successfully in {ElapsedMs}ms",
            order.Id,
            elapsed.TotalMilliseconds);

        return order;
    }
}

The key properties of Serilog that you rely on day-to-day:

  • Structured events — {CustomerId} becomes a queryable property, not just a substring in a message
  • Context enrichment — LogContext.PushProperty() adds properties to all subsequent events in a scope
  • Request ID correlation — every log line for a request shares a RequestId property, making filtering in Application Insights trivial
  • Minimum level overrides — suppress Microsoft.* noise while keeping your own code at Debug

The Node.js equivalent covers all of this, with a different set of trade-offs.


The Node.js Way

NestJS Built-In Logger

NestJS ships with a Logger class that you can use immediately without configuration:

// TypeScript — NestJS built-in Logger
import { Injectable, Logger } from "@nestjs/common";

@Injectable()
export class OrdersService {
    private readonly logger = new Logger(OrdersService.name);

    async placeOrder(command: PlaceOrderCommand): Promise<Order> {
        this.logger.log(`Placing order for customer ${command.customerId}`);
        this.logger.warn(`Low inventory for SKU ${command.items[0]?.sku}`);
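        // error and order below come from the surrounding business logic (omitted here)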
        this.logger.error("Order placement failed", error.stack);
        return order;
    }
}

The built-in logger writes colored, human-readable output to stdout. It is useful for local development. For production, you replace it with Pino — a structured logger that writes JSON, runs substantially faster, and integrates with log aggregation systems.

Pino: The Serilog of Node.js

Pino is the standard structured logger for production Node.js and NestJS. It is designed around a single performance constraint: logging should be fast enough to do on every request without measurable overhead. It keeps the hot path to minimal, low-overhead JSON serialization, and since Pino 7 its transports (pretty-printing, shipping logs to an aggregator) can run in worker threads so they do not block the event loop.

pnpm add pino pino-http nestjs-pino
pnpm add -D pino-pretty  # dev-only: human-readable output

nestjs-pino is the NestJS integration that replaces the built-in Logger with Pino and provides the @InjectPinoLogger() decorator:

// TypeScript — Pino configuration in AppModule
// src/app.module.ts
import { Module } from "@nestjs/common";
import { LoggerModule } from "nestjs-pino";
import { randomUUID } from "crypto";
import { Request, Response } from "express";

@Module({
    imports: [
        LoggerModule.forRoot({
            pinoHttp: {
                // Use pretty-print in development, JSON in production
                ...(process.env.NODE_ENV !== "production"
                    ? { transport: { target: "pino-pretty", options: { colorize: true } } }
                    : {}),

                // Log level: map environment to Pino levels
                level: process.env.LOG_LEVEL ?? "info",

                // Request ID generation — equivalent to Serilog's RequestId enricher
                genReqId: (req: Request, res: Response): string => {
                    // Honor forwarded request IDs from a load balancer or API gateway
                    const existing = req.headers["x-request-id"];
                    if (typeof existing === "string" && existing) return existing;
                    const id = randomUUID();
                    res.setHeader("x-request-id", id);
                    return id;
                },

                // Customize what gets logged per request
                customReceivedMessage: (req: Request) =>
                    `Incoming ${req.method} ${req.url}`,

                customSuccessMessage: (req: Request, res: Response, responseTime: number) =>
                    `${req.method} ${req.url} ${res.statusCode} — ${responseTime}ms`,

                // Redact sensitive fields from logs — equivalent to Serilog destructuring policies
                redact: {
                    paths: ["req.headers.authorization", "req.body.password", "*.creditCardNumber"],
                    censor: "[REDACTED]",
                },

                // Serializers: control how objects appear in log output
                serializers: {
                    req(req: Request) {
                        return {
                            id: req.id,
                            method: req.method,
                            url: req.url,
                            query: req.query,
                            // Do not log body by default — may contain PII
                        };
                    },
                    res(res: Response) {
                        return { statusCode: res.statusCode };
                    },
                },
            },
        }),
    ],
})
export class AppModule {}

Then tell NestJS to use the Pino logger globally:

// TypeScript — main.ts
import { NestFactory } from "@nestjs/core";
import { Logger } from "nestjs-pino";
import { AppModule } from "./app.module";

async function bootstrap(): Promise<void> {
    const app = await NestFactory.create(AppModule, { bufferLogs: true });
    app.useLogger(app.get(Logger));
    await app.listen(3000);
}
bootstrap();

bufferLogs: true holds startup log messages until Pino is initialized, so you don’t lose the first few lines of output to the default logger.

Using the Pino Logger in Services

// TypeScript — injecting and using Pino in a service
import { Injectable } from "@nestjs/common";
import { InjectPinoLogger, PinoLogger } from "nestjs-pino";

@Injectable()
export class OrdersService {
    constructor(
        @InjectPinoLogger(OrdersService.name)
        private readonly logger: PinoLogger,
    ) {}

    async placeOrder(command: PlaceOrderCommand): Promise<Order> {
        // Structured log — properties are queryable fields, not substrings
        this.logger.info(
            {
                customerId: command.customerId,
                itemCount: command.items.length,
            },
            "Placing order",
        );

        const start = Date.now();
        const order = await this.processOrder(command);

        this.logger.info(
            {
                orderId: order.id,
                customerId: command.customerId,
                elapsedMs: Date.now() - start,
            },
            "Order placed successfully",
        );

        return order;
    }
}

Compare the Pino call signature with Serilog:

// Serilog
_logger.LogInformation("Order {OrderId} placed in {ElapsedMs}ms", order.Id, elapsed.TotalMilliseconds);

// Pino
this.logger.info({ orderId: order.id, elapsedMs }, "Order placed successfully");

The structural intent is identical: you want orderId and elapsedMs as discrete, filterable fields in your log store. Pino’s signature puts the properties object first and the message string second. The emitted JSON looks like:

{
  "level": 30,
  "time": 1739875200000,
  "pid": 1,
  "hostname": "web-1",
  "reqId": "9f3a2b1c-4d5e-6f7a-8b9c-0d1e2f3a4b5c",
  "context": "OrdersService",
  "orderId": "ord_abc123",
  "customerId": "usr_xyz789",
  "elapsedMs": 47,
  "msg": "Order placed successfully"
}

Request ID Correlation

Serilog’s RequestId enricher adds the ASP.NET Core request ID to every log event within a request pipeline. Pino achieves the same result through pino-http: the request ID generated by genReqId is automatically attached to every log event that occurs during that request, because nestjs-pino binds the Pino child logger to the request context via AsyncLocalStorage.

This means you do not need to pass a request ID parameter through your service methods. Any this.logger.info(...) call inside a service — however deep in the call stack — automatically carries the reqId from the originating HTTP request:

// TypeScript — correlation is automatic, no parameter threading needed

// Controller receives request with reqId "abc-123"
@Get(":id")
async findOne(@Param("id") id: string) {
    return this.ordersService.findOne(id); // No need to pass request ID
}

// Service logs automatically include reqId "abc-123"
@Injectable()
export class OrdersService {
    async findOne(id: string): Promise<Order> {
        this.logger.info({ orderId: id }, "Fetching order"); // reqId is there automatically
        return this.repo.findById(id);
    }
}

This is implemented via Node.js AsyncLocalStorage — the equivalent of C#’s AsyncLocal<T> or the HTTP request scope in .NET’s DI container. nestjs-pino manages it for you.

Pino Log Levels

Pino uses numeric levels that map directly to Serilog’s semantic levels:

Serilog LevelPino LevelNumeric ValueUse Case
Verbosetrace10Fine-grained debugging, normally off in production
Debugdebug20Development debugging
Informationinfo30Normal operational events
Warningwarn40Degraded state, recoverable
Errorerror50Errors requiring attention
Fatalfatal60Unrecoverable errors, process exit imminent

Set the minimum level via the LOG_LEVEL environment variable. Unlike Serilog, Pino does not support per-namespace minimum level overrides natively — for that, use a transport such as pino-loki that filters at the destination, or suppress noisy library logging at the serializer level.
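pino-http does offer a per-request knob, customLogLevel, which is often enough to silence the noisiest sources (health checks, 404 scans) without lowering the global level. A sketch, assuming it sits in the pinoHttp options alongside genReqId above:

// TypeScript — pinoHttp option: per-route log level (sketch)
customLogLevel: (req, res, err) => {
    if (req.url === "/health") return "silent";  // drop health-check noise entirely
    if (err || res.statusCode >= 500) return "error";
    if (res.statusCode >= 400) return "warn";
    return "info";
},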

Structured Logging Patterns

Three patterns worth establishing as conventions across the codebase:

1. Error logging — always include the error object:

// TypeScript — correct error logging
try {
    await this.processPayment(order);
} catch (err) {
    // Pass the error as a structured field, not as a string
    // Pino's error serializer extracts message, stack, type
    this.logger.error({ err, orderId: order.id }, "Payment processing failed");
}

Pino has a built-in error serializer that extracts message, type, and stack from an Error object into structured fields. If you log err.message as a string, you lose the stack trace. Always pass the Error object under the err key.

2. Duration tracking — log elapsed time as a number:

// TypeScript — duration as a number, not a string
const start = performance.now();
await externalService.call();
const durationMs = Math.round(performance.now() - start);

this.logger.info({ durationMs, service: "payment-gateway" }, "External call completed");

Log durations as milliseconds (integer). When filtered in a log aggregator like Grafana Loki or Datadog, numeric fields can be aggregated and graphed — avg(durationMs) by service — whereas "took 47ms" is just a string.

3. Business context — always include the domain entity ID:

// TypeScript — include entity IDs in every log
this.logger.info({ userId: user.id, action: "password_reset_requested" }, "User action");
this.logger.warn({ orderId: order.id, reason: "high_value" }, "Order flagged for review");

Every log event for an operation should carry the primary entity ID. This lets you filter all logs for a specific order or user when debugging a customer complaint — the same reason you put {OrderId} in every Serilog message template.

Log Aggregation on Render

On Render, all stdout output from your service is collected and available in the dashboard. For structured JSON logs, Render’s log viewer does basic filtering. For serious log querying, ship logs to a dedicated aggregator.

For teams already using Sentry, the most common lightweight setup is to skip a dedicated log aggregator entirely and rely on Sentry for error-level events. For info/warn logs, Render’s built-in retention (7 days on free, configurable on paid) is often sufficient for early-stage products.

For production-grade log aggregation, the two common choices are:

  • Grafana Loki — pairs with Grafana dashboards, cheap, label-based querying
  • Datadog — expensive, but Application Insights parity if your organization already uses it

Configure a Pino transport to ship to either:

pnpm add pino-loki  # for Grafana Loki
// TypeScript — Pino transport for Grafana Loki (production)
transport: {
    targets: [
        {
            target: "pino-loki",
            options: {
                host: process.env.LOKI_URL,
                labels: {
                    app: "my-api",
                    environment: process.env.NODE_ENV,
                },
                basicAuth: {
                    username: process.env.LOKI_USER,
                    password: process.env.LOKI_PASSWORD,
                },
            },
            level: "info",
        },
    ],
},

Sentry: Application Insights for the Node.js Stack

Application Insights does two things well: error tracking and performance monitoring (APM). Sentry covers both roles in the Node.js stack. The mental model is a direct substitution.

Application Insights FeatureSentry Equivalent
Exception tracking with stack tracesSentry.captureException()
Custom events / telemetrySentry.captureMessage(), custom breadcrumbs
Performance monitoring (requests, dependencies)Sentry Performance — automatic for HTTP, DB, queues
User context on errorsSentry.setUser()
Request context on errorsSentry.setContext()
ITelemetryProcessor (filter noise)beforeSend callback
Release trackingrelease option — tag errors to a deploy
Breadcrumbs (event timeline)Sentry breadcrumbs (automatic for HTTP and console)
Alert rulesSentry issue alerts and metric alerts

NestJS + Sentry Setup

pnpm add @sentry/node @sentry/profiling-node

Initialize Sentry before anything else in main.ts — before NestJS creates the application:

// TypeScript — main.ts: Sentry must be initialized first
import "./instrument"; // import before all other modules
import { NestFactory } from "@nestjs/core";
import { Logger } from "nestjs-pino";
import { AppModule } from "./app.module";

async function bootstrap(): Promise<void> {
    const app = await NestFactory.create(AppModule, { bufferLogs: true });
    app.useLogger(app.get(Logger));
    await app.listen(3000);
}
bootstrap();
// TypeScript — src/instrument.ts (Sentry initialization)
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

Sentry.init({
    dsn: process.env.SENTRY_DSN,

    environment: process.env.NODE_ENV ?? "development",

    // Tag errors with the current git SHA or version
    release: process.env.SENTRY_RELEASE ?? process.env.GIT_SHA,

    integrations: [
        nodeProfilingIntegration(),
        // Automatic instrumentation for HTTP, pg, Redis, etc.
        // Sentry auto-detects installed packages
    ],

    // Sample 10% of transactions for performance monitoring in production
    // Capture all in development
    tracesSampleRate: process.env.NODE_ENV === "production" ? 0.1 : 1.0,

    profilesSampleRate: 1.0, // Profile the sampled transactions

    // Filter events before they are sent — equivalent to ITelemetryProcessor
    beforeSend(event, hint) {
        // Do not send 4xx errors to Sentry — they are expected application behavior
        const exception = hint?.originalException;
        if (exception instanceof Error) {
            const statusCode = (exception as any).status ?? (exception as any).statusCode;
            if (typeof statusCode === "number" && statusCode >= 400 && statusCode < 500) {
                return null; // Drop this event
            }
        }
        return event;
    },
});

Capturing Errors with Context

In your NestJS global exception filter (Article 1.8), add Sentry capture for 5xx errors:

// TypeScript — GlobalExceptionFilter with Sentry + Pino integration
import * as Sentry from "@sentry/node";
import { ExceptionFilter, Catch, ArgumentsHost, HttpException, HttpStatus } from "@nestjs/common";
import { InjectPinoLogger, PinoLogger } from "nestjs-pino";
import { Request, Response } from "express";

@Catch()
export class GlobalExceptionFilter implements ExceptionFilter {
    constructor(
        @InjectPinoLogger(GlobalExceptionFilter.name)
        private readonly logger: PinoLogger,
    ) {}

    catch(exception: unknown, host: ArgumentsHost): void {
        const ctx = host.switchToHttp();
        const request = ctx.getRequest<Request>();
        const response = ctx.getResponse<Response>();

        const status = this.resolveStatus(exception);
        const message = this.resolveMessage(exception);

        if (status >= 500) {
            // Log via Pino — will include the reqId correlation
            this.logger.error(
                { err: exception, path: request.url, method: request.method },
                "Unhandled exception",
            );

            // Send to Sentry with request context
            Sentry.withScope((scope) => {
                scope.setTag("endpoint", `${request.method} ${request.route?.path ?? request.url}`);
                scope.setContext("request", {
                    method: request.method,
                    url: request.url,
                    query: request.query,
                    requestId: request.headers["x-request-id"],
                });

                // If you have auth middleware that attaches user to request:
                const user = (request as any).user;
                if (user) {
                    scope.setUser({ id: user.id, email: user.email });
                }

                Sentry.captureException(exception);
            });
        }

        response.status(status).json({ statusCode: status, message });
    }

    private resolveStatus(exception: unknown): number {
        if (exception instanceof HttpException) return exception.getStatus();
        return HttpStatus.INTERNAL_SERVER_ERROR;
    }

    private resolveMessage(exception: unknown): string {
        if (exception instanceof HttpException) return exception.message;
        return "An unexpected error occurred.";
    }
}

Breadcrumbs are Sentry’s equivalent of Application Insights’ dependency tracking and custom events. Sentry automatically adds breadcrumbs for outbound HTTP requests (if you use node-fetch or axios), console log output, and database queries (if you use a supported ORM). You can also add manual breadcrumbs:

// TypeScript — manual breadcrumbs for business events
Sentry.addBreadcrumb({
    category: "payment",
    message: "Payment gateway called",
    data: { orderId, amount, currency, gateway: "stripe" },
    level: "info",
});

// When an error occurs, Sentry shows the timeline leading up to it
// including all breadcrumbs, so you can see exactly what happened

This is the equivalent of TelemetryClient.TrackDependency() or TelemetryClient.TrackEvent() in the Application Insights SDK.

User Context

Application Insights correlates telemetry with user sessions automatically when using the JavaScript SDK. In Sentry, set user context explicitly after authentication:

// TypeScript — set Sentry user context after auth middleware resolves
// src/common/middleware/sentry-user.middleware.ts
import { Injectable, NestMiddleware } from "@nestjs/common";
import * as Sentry from "@sentry/node";
import { Request, Response, NextFunction } from "express";

@Injectable()
export class SentryUserMiddleware implements NestMiddleware {
    use(req: Request, _res: Response, next: NextFunction): void {
        // Called after auth middleware has attached user to request
        const user = (req as any).user;
        if (user) {
            Sentry.setUser({
                id: user.id,
                email: user.email,
                username: user.username,
            });
        }
        next();
    }
}

Once set, every Sentry event captured during that request will include the user context, making it possible to look up “all errors this specific user encountered” — the same search you would run in Application Insights against user_Id.
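The middleware still has to be registered. A minimal sketch of wiring it into AppModule so it runs on every route, after whatever auth middleware populates req.user:

// TypeScript — register the middleware in AppModule (sketch)
import { MiddlewareConsumer, Module, NestModule } from "@nestjs/common";
import { SentryUserMiddleware } from "./common/middleware/sentry-user.middleware";

@Module({ /* imports, controllers, providers */ })
export class AppModule implements NestModule {
    configure(consumer: MiddlewareConsumer): void {
        // Runs on all routes; order it after your auth middleware
        consumer.apply(SentryUserMiddleware).forRoutes("*");
    }
}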

Performance Monitoring: Distributed Tracing

Sentry’s performance monitoring creates a distributed trace across all services that participate in a request. If your NestJS API calls an external HTTP service, and both have Sentry initialized, Sentry links the traces using the sentry-trace header.

For custom operations you want to measure, wrap them in a span:

// TypeScript — manual performance span
import * as Sentry from "@sentry/node";

async function processLargeExport(exportId: string): Promise<void> {
    return Sentry.startSpan(
        {
            op: "export.process",
            name: "Process Large Export",
            attributes: { exportId, exportType: "csv" },
        },
        async (span) => {
            // fetchExportRows and writeToStorage stand in for your own data-access helpers
            const rows = await fetchExportRows(exportId);
            span.setAttribute("rowCount", rows.length);

            await writeToStorage(rows);
        },
    );
}

This is equivalent to TelemetryClient.StartOperation() in the Application Insights SDK, which creates a custom dependency entry in the application map.


Key Differences

ConceptSerilog / Application InsightsPino / Sentry
Logger injectionILogger<T> via DI@InjectPinoLogger(ClassName.name) or new Logger(ClassName.name)
Structured properties{PropertyName} in message templateFirst argument object: { propertyName: value }
Request ID correlationRequestId enricher via LogContextpino-http + AsyncLocalStorage — automatic
Global log configurationLoggerConfiguration in Program.csLoggerModule.forRoot() in AppModule
Minimum level overrides.MinimumLevel.Override("Microsoft", Warning)Set at transport level — no per-namespace override in Pino
Log sinksSerilog sinks (Console, File, Seq, App Insights)Pino transports (console, pino-loki, pino-datadog)
RedactionDestructure.ByTransforming<T>()redact option with JSON path patterns
Error trackingApplication Insights exception telemetrySentry.captureException()
Performance monitoringApplication Insights APM, dependency trackingSentry Performance, distributed tracing
User context on errorsApplication Insights user correlation via JS SDKSentry.setUser() — explicit
SamplingApplication Insights adaptive samplingtracesSampleRate in Sentry.init()
Event filteringITelemetryProcessorbeforeSend callback
Error groupingApplication Insights exception groupingSentry issue fingerprinting
Release trackingcloud_RoleInstance, deployment annotationsrelease field in Sentry.init()
Local development outputConsole sink, human-readablepino-pretty transport

Gotchas for .NET Engineers

Gotcha 1: Pino’s Log Format Is { props } message, Not a Message Template

In Serilog, the message template is the primary unit — "Order {OrderId} placed" — and properties are named slots in that template. Pino reverses this: the object comes first and the message is a plain string with no property substitution.

// WRONG — trying to use Serilog-style message templates in Pino
this.logger.info("Order %s placed in %dms", order.id, elapsedMs);
// Pino supports printf-style interpolation but it defeats structured logging
// The values end up embedded in the message string, not as separate fields

// CORRECT — properties in the object, message is a plain label
this.logger.info({ orderId: order.id, elapsedMs }, "Order placed");

The output JSON from the wrong approach has "msg": "Order ord_abc123 placed in 47ms" — a string you cannot query on. The correct approach has "orderId": "ord_abc123" and "elapsedMs": 47 as separate fields you can filter, group, and aggregate.

If you are migrating a codebase from Serilog templates to Pino and you see message strings with embedded values everywhere, that is technical debt — not equivalent behavior.

Gotcha 2: Sentry beforeSend Must Be Synchronous — No await Inside It

The beforeSend callback is called synchronously by Sentry before it sends an event. If you put async logic inside it (querying a database to enrich the event, for example), the callback will return a Promise instead of the event or null. Sentry does not await that Promise — it will either treat it as a truthy event (unexpected behavior) or discard it.

// WRONG — async beforeSend
beforeSend: async (event, hint) => {
    const extra = await this.lookupAdditionalContext(hint.originalException);
    event.extra = { ...event.extra, ...extra };
    return event; // This is the resolved value, but beforeSend has already returned the Promise
},

// CORRECT — synchronous only; do enrichment in Sentry scopes at the capture site
beforeSend: (event, hint) => {
    const exception = hint?.originalException;
    if (exception instanceof HttpException && exception.getStatus() < 500) {
        return null; // Drop 4xx errors
    }
    return event;
},

For async enrichment, use Sentry.withScope() at the capture site and set context there with synchronous data that is already in scope.
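A sketch of that pattern, where order and err are whatever is already in scope at the point the error is caught:

// TypeScript — enrich synchronously at the capture site, not inside beforeSend
Sentry.withScope((scope) => {
    scope.setTag("feature", "checkout");
    scope.setContext("order", { id: order.id, total: order.total }); // already loaded, no await needed
    Sentry.captureException(err);
});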

Gotcha 3: pino-pretty in Production Destroys Your Log Aggregation

pino-pretty is a development tool that formats Pino’s JSON output as human-readable colored text. If it ends up in your production configuration — even accidentally through a shared config or a forgotten NODE_ENV check — every log line becomes an unstructured string. Your log aggregation tool cannot parse it. You lose all queryable fields.

The guard is straightforward:

// TypeScript — explicit environment check, always
transport: process.env.NODE_ENV !== "production"
    ? { target: "pino-pretty", options: { colorize: true } }
    : undefined,

Set NODE_ENV=production in your Render or Docker environment. Do not rely on it being set by the deploy platform — set it explicitly.

Gotcha 4: Application Insights Tracks More Automatically Than Sentry

Application Insights’ Node.js SDK (and the Azure Monitor OpenTelemetry distro) automatically instruments http, https, pg, mysql, redis, mongodb, and several other modules. Sentry does the same for the modules it explicitly supports, but the list is shorter and the level of automatic detail differs.

Specifically: Application Insights records every outbound HTTP request as a dependency with timing, status code, and URL. Sentry records outbound HTTP requests as breadcrumbs in the event timeline, but does not create standalone transactions for them unless they are triggered within a sampled transaction.

If your team relies on the Application Insights application map (the visual graph of service dependencies), you will need to set up OpenTelemetry separately if that level of detail matters. Sentry’s performance monitoring is excellent for identifying slow endpoints and tracing user-initiated flows, but it is not a drop-in replacement for the Application Insights application map.

Gotcha 5: Log Levels in NestJS Bootstrap vs. Application Logs

NestJS emits its own internal logs (module registration, dependency resolution, route mapping) during startup. With nestjs-pino, these go through Pino. If your LOG_LEVEL is set to warn or higher, you will suppress NestJS’s startup [NestFactory] and [RoutesResolver] info logs entirely. This is usually what you want in production, but it can cause confusion when debugging startup issues.

During production incident response, temporarily lower the log level:

# Render: set LOG_LEVEL environment variable
LOG_LEVEL=debug

NestJS also has a separate logger option in NestFactory.create() that controls which NestJS internal log categories appear. Do not confuse this with Pino’s level setting — they are separate configuration points.
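A sketch of the two knobs side by side; the values shown are illustrative:

// TypeScript — two separate settings, easy to confuse
// 1. NestFactory's logger option: which NestJS internal categories are emitted at all
const app = await NestFactory.create(AppModule, {
    bufferLogs: true,
    logger: ["error", "warn", "log"], // drops "debug" and "verbose" internal messages
});

// 2. Pino's level: controls your application logs, set via the environment
//    LOG_LEVEL=debug pnpm start:dev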


Hands-On Exercise

Instrument a NestJS API with Pino structured logging, request ID correlation, and Sentry error tracking. By the end, every request should produce a correlated log trail and any unhandled error should appear in Sentry with user context and a clean stack trace.

Prerequisites: A NestJS project with at least one controller and service, and a Sentry account with a Node.js project DSN.

Step 1 — Install dependencies

pnpm add pino pino-http nestjs-pino @sentry/node @sentry/profiling-node
pnpm add -D pino-pretty

Step 2 — Create src/instrument.ts

Initialize Sentry with your DSN, set tracesSampleRate: 1.0 for development, and add a beforeSend that drops 4xx errors.

Step 3 — Import instrument.ts as the first line of main.ts

Verify Sentry is initialized before NestJS bootstrap. Confirm the connection in the Sentry dashboard, for example by calling Sentry.captureMessage("sentry wired up") once during startup and checking that it appears under Issues.

Step 4 — Configure Pino in AppModule

Add LoggerModule.forRoot() with a genReqId that honors x-request-id headers. Configure pino-pretty for development. Add redact paths for req.headers.authorization and req.body.password.

Step 5 — Replace the built-in logger in main.ts

Call app.useLogger(app.get(Logger)) after Pino is initialized. Confirm NestJS startup logs now appear in JSON format in production mode or colored in development.

Step 6 — Inject PinoLogger into a service

Replace any console.log or new Logger() usage with @InjectPinoLogger(ServiceName.name). Emit at least one info log with a structured properties object.

Step 7 — Wire Sentry into the global exception filter

In your GlobalExceptionFilter, add Sentry.captureException() for 5xx errors. Use Sentry.withScope() to attach the request path and user context.

Step 8 — Test correlation

Make an API call and grep the logs for the reqId value. Verify that the request log, any service logs, and any error logs all carry the same reqId. Then trigger a 500 error (throw from a service) and verify it appears in Sentry with a readable stack trace.
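One way to run that check from a terminal, assuming the dev server’s output is piped to api.log, jq is installed, and the route exists in your project (the ID values are arbitrary):

# Send a request with a known request ID (genReqId honors x-request-id)
curl -H "x-request-id: corr-test-123" http://localhost:3000/orders/ord_abc123

# Every log line for that request — controller, service, error — should carry the same reqId
grep "corr-test-123" api.log | jq '{reqId, context, msg}'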

Stretch goal: Add a Sentry performance span around one slow operation in your service and verify it appears in the Sentry Performance dashboard with correct timing.


Quick Reference

Serilog / App Insights ConceptPino / Sentry EquivalentNotes
ILogger<T> injection@InjectPinoLogger(ClassName.name)From nestjs-pino
_logger.LogInformation(template, args)this.logger.info({ props }, "message")Properties first, message second
_logger.LogError(ex, template, args)this.logger.error({ err }, "message")Pass Error object as err field
LogContext.PushProperty() enrichmentAsyncLocalStorage via pino-httpAutomatic for request context
RequestId enrichergenReqId in pinoHttp configAuto-attached to all logs in the request
MinimumLevel.Override("X", Warning)No per-namespace override in PinoSet at transport or filter downstream
WriteTo.Console(new JsonFormatter())Pino default output is JSONUse pino-pretty for dev only
WriteTo.ApplicationInsights()pino-loki, pino-datadog transportsOr skip and rely on Sentry
Destructure.ByTransforming<T>()serializers option in pinoHttpPer-key transform functions
Enrich.WithMachineName()pid, hostname in Pino output — automaticPino includes these by default
TelemetryClient.TrackException()Sentry.captureException(err)Direct equivalent
TelemetryClient.TrackEvent()Sentry.captureMessage() + breadcrumbsBreadcrumbs for timeline events
TelemetryClient.TrackDependency()Sentry.startSpan()Manual spans for custom operations
TelemetryClient.TrackRequest()Automatic via Sentry HTTP integrationNo manual call needed
ITelemetryProcessorbeforeSend in Sentry.init()Synchronous only
User context (AuthenticatedUserTelemetryInitializer)Sentry.setUser({ id, email })Call after auth resolves
Application Insights application mapNo direct Sentry equivalentUse OpenTelemetry + Tempo/Jaeger
release annotation in App Insightsrelease: process.env.GIT_SHA in Sentry.init()Links errors to specific deploys
SamplingPercentageTelemetryProcessortracesSampleRate: 0.110% sampling = 0.1

Pino log levels:

logger.trace({ ... }, "msg"); // Level 10 — verbose debugging
logger.debug({ ... }, "msg"); // Level 20 — development debugging
logger.info({ ... }, "msg");  // Level 30 — normal operation
logger.warn({ ... }, "msg");  // Level 40 — recoverable issues
logger.error({ ... }, "msg"); // Level 50 — errors requiring attention
logger.fatal({ ... }, "msg"); // Level 60 — process exit imminent

Further Reading

4.10 — API Client Generation: NSwag vs. OpenAPI Codegen

For .NET engineers who know: NSwag, Swashbuckle, generating typed C# clients from Swagger/OpenAPI specs
You’ll learn: The TypeScript equivalents of NSwag — openapi-typescript for types, orval for TanStack Query hooks, and tRPC as the option that eliminates codegen entirely — and when each approach is the right tool
Time: 10-15 minutes


The .NET Way (What You Already Know)

NSwag is the standard tool for generating typed API clients in .NET. Given a running ASP.NET Core API or a swagger.json spec file, it generates complete C# client classes: typed request/response models, method signatures matching your controller routes, and HttpClient-based implementations.

# .NET — generate C# client from a running API
dotnet tool install -g NSwag.ConsoleCore
nswag openapi2csclient \
  /input:http://localhost:5000/swagger/v1/swagger.json \
  /output:OrdersApiClient.cs \
  /namespace:MyApp.ApiClients \
  /classname:OrdersApiClient

The output is a complete, typed client:

// C# — NSwag-generated client (excerpt)
public partial class OrdersApiClient
{
    private readonly HttpClient _httpClient;

    public OrdersApiClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    /// <exception cref="ApiException">Thrown when the request fails.</exception>
    public async Task<OrderDto> GetOrderAsync(Guid id, CancellationToken cancellationToken = default)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, $"/api/orders/{id}");
        var response = await _httpClient.SendAsync(request, cancellationToken);
        response.EnsureSuccessStatusCode();
        return JsonSerializer.Deserialize<OrderDto>(
            await response.Content.ReadAsStringAsync(),
            _jsonOptions);
    }

    public async Task<IReadOnlyList<OrderDto>> ListOrdersAsync(
        Guid? customerId = null,
        OrderStatus? status = null,
        CancellationToken cancellationToken = default)
    { /* ... */ }
}

You register the generated client in DI, inject it into your services, and you have compile-time safety that spans the client-server boundary. If the API changes the response shape or removes an endpoint, the generated code updates on the next codegen run and the type errors are compiler errors.

The TypeScript ecosystem achieves the same outcome, but with more tool choices and a different set of trade-offs.


The TypeScript Way

The Three Approaches

There is no single “TypeScript NSwag.” The ecosystem has three distinct tools that occupy different points on the trade-off spectrum:

openapi-typescript      orval                   tRPC
──────────────────      ─────────────────       ─────────────────────
Types only (no          Types + TanStack         No OpenAPI at all —
HTTP calls)             Query hooks + mutations  share types directly
                                                 via TypeScript modules

External APIs           Your own API             Your own API,
Remote specs            consumed in React        TS-only stack
Lowest codegen cost     Most complete            Highest setup cost,
                        for React projects       no external API support

openapi-typescript: Types Without the Client

openapi-typescript generates TypeScript types from an OpenAPI spec — it does not generate an HTTP client. The output is a single .d.ts file containing types for every path, method, request body, and response body in your spec.

pnpm add -D openapi-typescript
# Generate types from a local spec
npx openapi-typescript ./openapi.yaml -o src/api/schema.d.ts

# Generate from a running server
npx openapi-typescript http://localhost:3000/api-json -o src/api/schema.d.ts

The output looks like this (simplified):

// TypeScript — auto-generated schema.d.ts (excerpt)
export interface paths {
    "/orders/{id}": {
        get: {
            parameters: { path: { id: string } };
            responses: {
                200: { content: { "application/json": components["schemas"]["OrderDto"] } };
                404: { content: { "application/json": components["schemas"]["ProblemDetails"] } };
            };
        };
    };
    "/orders": {
        post: {
            requestBody: {
                content: { "application/json": components["schemas"]["PlaceOrderRequest"] };
            };
            responses: {
                201: { content: { "application/json": components["schemas"]["OrderDto"] } };
            };
        };
    };
}

export interface components {
    schemas: {
        OrderDto: {
            id: string;
            customerId: string;
            status: "pending" | "confirmed" | "shipped" | "cancelled";
            total: number;
            items: components["schemas"]["OrderItemDto"][];
        };
        // ...
    };
}

You then use these types with the openapi-fetch companion library, which is a typed wrapper around the Fetch API:

pnpm add openapi-fetch
// TypeScript — typed API client using openapi-fetch + generated schema
import createClient from "openapi-fetch";
import type { paths } from "./api/schema";

// Create a typed client instance
const apiClient = createClient<paths>({
    baseUrl: process.env.NEXT_PUBLIC_API_URL,
    headers: {
        "Content-Type": "application/json",
    },
});

// TypeScript knows the request and response shapes for every endpoint
const { data, error } = await apiClient.GET("/orders/{id}", {
    params: { path: { id: "ord_abc123" } },
});

// data is typed as OrderDto | undefined
// error is typed as the 404 response body | undefined

const { data: newOrder, error: createError } = await apiClient.POST("/orders", {
    body: {
        customerId: "usr_xyz789",
        items: [{ productId: "prod_123", quantity: 2 }],
    },
});

The key difference from NSwag: openapi-fetch does not generate code — it is a generic typed wrapper that uses the generated schema types at compile time. There is no generated client file to check in. This avoids the common problem of generated files creating massive diffs in pull requests.

orval: Types + TanStack Query Hooks

orval goes further than openapi-typescript. It generates:

  • TypeScript types (same as openapi-typescript)
  • TanStack Query useQuery hooks for GET endpoints
  • TanStack Query useMutation hooks for POST/PUT/PATCH/DELETE endpoints
  • Zod schemas for runtime validation (optional)
  • MSW (Mock Service Worker) mocks for testing (optional)

If your frontend uses TanStack Query (Article 5.3), orval eliminates the boilerplate of writing custom hooks around every API call.

pnpm add -D orval
// orval.config.ts — configuration file
import { defineConfig } from "orval";

export default defineConfig({
    ordersApi: {
        input: {
            target: "http://localhost:3000/api-json",
            // Or a local file:
            // target: "./openapi.yaml",
        },
        output: {
            target: "./src/api/orders.generated.ts",
            schemas: "./src/api/model",
            client: "react-query",
            override: {
                mutator: {
                    // Custom axios instance or fetch wrapper with auth headers
                    path: "./src/lib/api-client.ts",
                    name: "apiClient",
                },
            },
        },
        hooks: {
            afterAllFilesWrite: "prettier --write",
        },
    },
});
# Generate — run this after any API change
npx orval

The generated output for a GET endpoint:

// TypeScript — orval-generated hook (what you would write manually without it)
export const useGetOrder = (
    id: string,
    options?: UseQueryOptions<OrderDto, ApiError>,
) => {
    return useQuery<OrderDto, ApiError>({
        queryKey: getGetOrderQueryKey(id),
        queryFn: () => apiClient<OrderDto>({ url: `/orders/${id}`, method: "GET" }),
        ...options,
    });
};

export const getGetOrderQueryKey = (id: string) => [`/orders/${id}`] as const;

// Usage in a React component — no boilerplate to write
function OrderDetail({ id }: { id: string }) {
    const { data: order, isPending, error } = useGetOrder(id);

    if (isPending) return <Skeleton />;
    if (error) return <ErrorMessage error={error} />;
    return <OrderCard order={order} />;
}

For mutations:

// TypeScript — orval-generated mutation hook
export const usePlaceOrder = (
    options?: UseMutationOptions<OrderDto, ApiError, PlaceOrderRequest>,
) => {
    return useMutation<OrderDto, ApiError, PlaceOrderRequest>({
        mutationFn: (body) => apiClient<OrderDto>({ url: "/orders", method: "POST", data: body }),
        ...options,
    });
};

// Usage
function PlaceOrderForm() {
    const placeOrder = usePlaceOrder({
        onSuccess: (order) => router.push(`/orders/${order.id}`),
        onError: (err) => toast.error(err.message),
    });

    return (
        <form onSubmit={(e) => {
            e.preventDefault();
            placeOrder.mutate({ customerId, items });
        }}>
            {/* ... */}
        </form>
    );
}

tRPC: Eliminating Codegen Entirely

tRPC is a different approach. Rather than generating types from an OpenAPI spec, it shares TypeScript types directly between the server and client as a TypeScript module. There is no code generation, no spec file, no HTTP client. The client calls the server as if it were calling a local TypeScript function.

// TypeScript — tRPC server definition
// src/server/routers/orders.ts
import { z } from "zod";
import { router, protectedProcedure } from "../trpc";

export const ordersRouter = router({
    getById: protectedProcedure
        .input(z.object({ id: z.string() }))
        .query(async ({ input, ctx }) => {
            const order = await ctx.db.order.findUniqueOrThrow({
                where: { id: input.id },
            });
            return order;
        }),

    place: protectedProcedure
        .input(z.object({
            customerId: z.string(),
            items: z.array(z.object({ productId: z.string(), quantity: z.number().positive() })),
        }))
        .mutation(async ({ input, ctx }) => {
            return ctx.orderService.placeOrder(input);
        }),
});
// TypeScript — tRPC client (React) — typed automatically, no codegen
import { trpc } from "~/lib/trpc";

function OrderDetail({ id }: { id: string }) {
    // This is fully typed — including the input and output shapes
    // The types come from the server router definition directly
    const { data: order, isPending } = trpc.orders.getById.useQuery({ id });

    // ...
}

function PlaceOrderForm() {
    const placeOrder = trpc.orders.place.useMutation({
        onSuccess: (order) => router.push(`/orders/${order.id}`),
    });

    // ...
}

tRPC’s advantage: zero codegen, zero spec drift, zero intermediate files. The TypeScript compiler catches type errors at the call site immediately when the server procedure changes. This is the closest thing to .NET’s in-process service-to-service calls where you share types by reference.

The limitation — and it is significant — is that tRPC only works when both the client and server are TypeScript. You cannot call a tRPC endpoint from a .NET service, a Python script, or a mobile app. For any cross-language communication, you need OpenAPI.


Key Differences

ConceptNSwag (.NET)openapi-typescriptorvaltRPC
InputOpenAPI/Swagger specOpenAPI specOpenAPI specTypeScript router definition
OutputC# client class + modelsTypeScript .d.ts typesTS types + Query hooksNone — types shared directly
HTTP client generatedYes — full HttpClient wrapperNo — use openapi-fetchYes — configurable (axios, fetch)N/A — uses its own transport
React hooks generatedNoNoYes — TanStack QueryYes — built-in .useQuery()
Runtime validationNo (types only)NoOptional Zod schemasYes — Zod schemas on every procedure
Works with external APIsYesYesYesNo — both sides must be TypeScript
CI codegen requiredYesYesYesNo
Generated files in gitYes (large diffs)No (schema file only)YesNo
Spec drift riskYes — if codegen not runYesYesN/A

Gotchas for .NET Engineers

Gotcha 1: Generated Files Should Not Be Committed — Or Should Always Be Committed

There is a split in the community on whether generated files belong in version control. The NSwag convention in .NET is to check them in and treat them as source files — they show up in diffs, which makes API changes visible during code review.

The TypeScript community leans the other way: run codegen as a CI step, do not commit the output. This keeps diffs clean but means your CI pipeline must regenerate types on every build. If the API server is not running during CI, you need to commit the spec file (openapi.yaml or swagger.json) to the repository and generate from that.

Pick one approach and document it. The worst outcome is some developers running codegen locally and committing stale generated files while CI regenerates fresh ones — your generated files will drift from each other and produce confusing merge conflicts.

The recommended approach for openapi-typescript: commit the openapi.yaml spec file. Run codegen from the spec in CI and during local development. Do not commit schema.d.ts.
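One way to encode that convention, assuming the spec sits at the repo root and the generated file goes to src/api/ (adjust paths to your layout): add src/api/schema.d.ts to .gitignore and expose codegen as a script that both developers and CI run.

{
    "scripts": {
        "codegen": "openapi-typescript ./openapi.yaml -o src/api/schema.d.ts"
    }
}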

For orval: the generated hooks are more substantial and tend to get committed. Configure afterAllFilesWrite: "prettier --write" so the format is consistent.

Gotcha 2: tRPC Is Not a REST API — You Cannot Consume It from Other Services

A tRPC endpoint is not HTTP REST. It uses HTTP as transport but the wire format and routing conventions are tRPC-specific. You cannot call a tRPC endpoint with curl, from a .NET service, or from a mobile app using a REST client. The endpoint URLs look like /api/trpc/orders.getById — not /api/orders/{id}.

If you start a project with tRPC and later need to expose an endpoint to a third party, a mobile app, or a partner service, you will need to either add a separate REST layer alongside tRPC or migrate to an OpenAPI-based approach.

The safe heuristic: use tRPC for the internal surface between your Next.js frontend and your NestJS backend when both are TypeScript and you control both ends. Use OpenAPI for anything that crosses a language or team boundary. Article 4B.4 covers the cross-language OpenAPI patterns in more depth.

Gotcha 3: orval Mutations Do Not Automatically Invalidate Queries — You Have To Configure This

NSwag-generated clients are just HTTP wrappers — they know nothing about caching. orval generates TanStack Query hooks, and TanStack Query does cache client-side state. When you call usePlaceOrder().mutate(), TanStack Query does not automatically invalidate the useListOrders query. You have to configure this explicitly.

This is not an orval bug — it is a TanStack Query design decision. But .NET engineers who expect “I sent a POST, the GET should now return updated data” are surprised when stale data remains in the cache.

// TypeScript — configure query invalidation after mutation
// This is manual in TanStack Query, not automatic
const queryClient = useQueryClient();

const placeOrder = usePlaceOrder({
    onSuccess: () => {
        // Invalidate the list query so it refetches with the new order
        queryClient.invalidateQueries({ queryKey: ["/orders"] });
    },
});

orval can generate this boilerplate if you configure it with mutator options that include the queryClient, but you must understand what it is doing and why. See Article 5.3 for TanStack Query cache invalidation patterns.

Gotcha 4: OpenAPI Spec Drift Is Your Biggest Long-Term Risk

NSwag has the same problem — if you forget to regenerate after an API change, your client types are wrong and the compiler cannot catch it. In .NET this is somewhat mitigated by keeping the API and its consumers in the same solution, so the spec always reflects the code.

In a separate frontend/backend repository setup (common in this stack), spec drift is a real production risk. Automate it:

# .github/workflows/codegen.yml — run on every push to main
name: Regenerate API Types

on:
  push:
    branches: [main]
  pull_request:
    paths:
      - "apps/api/src/**"
      - "openapi.yaml"

jobs:
  codegen:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with: { version: 9 }

      - name: Setup Node
        uses: actions/setup-node@v4
        with: { node-version: 22 }

      - run: pnpm install

      - name: Regenerate types
        run: npx openapi-typescript ./openapi.yaml -o apps/web/src/api/schema.d.ts

      - name: Check for drift
        run: git diff --exit-code apps/web/src/api/schema.d.ts
        # Fails if generated types changed without a codegen run — CI catches drift

Alternatively: generate the spec from the running API in CI, run codegen, and commit the result. If the spec changed but the generated file was not committed, the diff check fails and the PR cannot merge.


Hands-On Exercise

This exercise sets up automated OpenAPI type generation for a NestJS + Next.js project and compares the experience with a tRPC approach.

Prerequisites: A NestJS API with Swagger configured (@nestjs/swagger) and a Next.js frontend in the same monorepo.

Part A — openapi-typescript setup

Step 1 — Export the spec from NestJS

NestJS with @nestjs/swagger exposes the spec at /api-json (JSON) or /api-yaml (YAML). Add a script to your package.json that fetches it and saves it to the repo:

{
    "scripts": {
        "spec:export": "curl http://localhost:3000/api-json -o openapi.json"
    }
}

Step 2 — Generate types

npx openapi-typescript ./openapi.json -o apps/web/src/api/schema.d.ts

Step 3 — Create the typed client

// apps/web/src/lib/api.ts
import createClient from "openapi-fetch";
import type { paths } from "../api/schema";

export const api = createClient<paths>({
    baseUrl: process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:3000",
});

Step 4 — Use the typed client in a page

Replace any untyped fetch() calls in one page with api.GET() or api.POST() calls. Verify that TypeScript catches a wrong parameter type or an incorrect response property access.
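A sketch of what that replacement might look like in a server component; the file path, import alias, and field names are assumptions based on the schema excerpt shown earlier:

// TypeScript — apps/web/src/app/orders/[id]/page.tsx (sketch)
import { api } from "@/lib/api"; // the typed client from Step 3 (alias path assumed)

export default async function OrderPage({ params }: { params: { id: string } }) {
    // Typed against the generated paths: wrong path params or property names fail to compile
    const { data: order, error } = await api.GET("/orders/{id}", {
        params: { path: { id: params.id } },
    });

    if (error || !order) return <p>Order not found</p>;
    return <h1>Order {order.id}: {order.status}</h1>;
}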

Step 5 — Add a CI check

Add a GitHub Actions workflow step that regenerates the types from the committed openapi.json and fails if the output differs from what is committed.

Part B — tRPC comparison (stretch goal)

If your project is a TypeScript-to-TypeScript stack with no external API consumers planned:

Step 6 — Install tRPC

pnpm add @trpc/server @trpc/client @trpc/react-query @tanstack/react-query

Step 7 — Define one procedure

Migrate one endpoint (for example, GET /orders/:id) from an OpenAPI route to a tRPC query procedure. Keep the original REST endpoint alongside it.

Step 8 — Compare the developer experience

Make a change to the return type of the tRPC procedure (add a field). Observe how TypeScript propagates the error to the call site immediately — no codegen required. Then make the equivalent change to the OpenAPI spec and observe that you must run codegen to see the error in the frontend.

Document your findings: which felt more productive for this workflow? What would make you choose one over the other?


Quick Reference

.NET / NSwag ConceptTypeScript EquivalentTool
nswag openapi2csclientopenapi-typescript ./spec.yaml -o schema.d.tsopenapi-typescript
Generated OrdersApiClient classcreateClient<paths>(config)openapi-fetch
Generated DTO classesGenerated paths and components interface typesopenapi-typescript
client.GetOrderAsync(id)api.GET("/orders/{id}", { params: { path: { id } } })openapi-fetch
client.PlaceOrderAsync(request)api.POST("/orders", { body: request })openapi-fetch
No equivalentuseGetOrder(id) — generated TanStack Query hookorval
No equivalentusePlaceOrder() — generated mutation hookorval
Shared types via assembly referencetRPC router type inference — no codegentRPC
nswag.json config fileorval.config.tsorval
Run codegen in CInpx openapi-typescript or npx orval in workflowCI script
Check for spec driftgit diff --exit-code generated-file.ts after codegenCI script
Swashbuckle for spec generation@nestjs/swagger + SwaggerModule.setup()NestJS
/swagger/v1/swagger.json/api-json (NestJS default)NestJS Swagger

When to use which tool:

ScenarioRecommended Tool
Consuming an external API (Stripe, GitHub, etc.)openapi-typescript + openapi-fetch
Your own API, React frontend, TanStack Queryorval
Your own API, TS-only stack, no external consumerstRPC
Your own API with mobile or non-TS consumersopenapi-typescript + openapi-fetch or orval
Cross-language API contracts (Article 4B.4)openapi-typescript from shared spec

Further Reading

Keeping .NET as Your API: Next.js/Nuxt as a Typed Frontend for ASP.NET Core

For .NET engineers who know: ASP.NET Core Web APIs, Swagger/Swashbuckle, Entity Framework Core, and JWT authentication
You’ll learn: How to connect a Next.js or Nuxt frontend to an existing ASP.NET Core API with full end-to-end type safety — and when this architecture is the right call
Time: 25-30 min read


The .NET Way (What You Already Know)

You have a working ASP.NET Core Web API. It has battle-tested controllers, complex EF Core queries tuned over months, middleware that handles multi-tenant auth, and business logic that would take a year to replicate. Your team knows it cold. The Swagger UI is the de facto contract with every consumer.

This is not a greenfield situation. This is most production systems.

The instinct when you start learning the modern JS stack is to assume you need to throw away the backend and rebuild in NestJS or tRPC. That instinct is wrong in a significant number of real-world cases. Understanding when to keep your .NET backend — and how to wire it properly to a TypeScript frontend — is one of the most practically valuable skills in this curriculum.

Here is a mature ASP.NET Core endpoint that will serve as the running example throughout this article:

// ProductsController.cs
[ApiController]
[Route("api/[controller]")]
[Authorize]
public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db;

    public ProductsController(AppDbContext db) => _db = db;

    [HttpGet]
    [ProducesResponseType(typeof(PagedResult<ProductDto>), 200)]
    public async Task<IActionResult> GetProducts(
        [FromQuery] ProductFilterDto filter,
        CancellationToken ct)
    {
        var query = _db.Products
            .Where(p => !p.IsDeleted)
            .Where(p => filter.CategoryId == null || p.CategoryId == filter.CategoryId)
            .Select(p => new ProductDto
            {
                Id = p.Id,
                Name = p.Name,
                Price = p.Price,
                StockCount = p.StockCount,
                CreatedAt = p.CreatedAt,
                Category = new CategoryDto { Id = p.Category.Id, Name = p.Category.Name }
            });

        var total = await query.CountAsync(ct);
        var items = await query
            .OrderBy(p => p.Name)
            .Skip((filter.Page - 1) * filter.PageSize)
            .Take(filter.PageSize)
            .ToListAsync(ct);

        return Ok(new PagedResult<ProductDto>
        {
            Items = items,
            Total = total,
            Page = filter.Page,
            PageSize = filter.PageSize
        });
    }

    [HttpPost("{id}/purchase")]
    [ProducesResponseType(typeof(PurchaseResultDto), 200)]
    [ProducesResponseType(typeof(ProblemDetails), 400)]
    public async Task<IActionResult> Purchase(
        int id,
        [FromBody] PurchaseRequestDto request,
        CancellationToken ct)
    {
        // Complex business logic, inventory checks, payment processing
        // — code that took months to get right
    }
}

That [ProducesResponseType] decoration is the bridge to the TypeScript world. Every annotated endpoint feeds Swashbuckle, which generates an OpenAPI specification, which a code generator transforms into TypeScript types. This is the chain you are about to build.
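Concretely, that chain ends in the same codegen command from Article 4.10, pointed at Swashbuckle’s spec endpoint; the port and output path below are assumptions for a typical local setup:

# Generate TypeScript types from the running ASP.NET Core API's Swagger spec
npx openapi-typescript http://localhost:5000/swagger/v1/swagger.json -o src/api/schema.d.ts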


The Architecture

graph TD
    Browser["Browser"]

    subgraph FE["Next.js / Nuxt (Render / Vercel / Azure SWA)"]
        SC["Server Components / Pages (RSC)"]
        CC["Client Components (TanStack Query)"]
        SC --- CC
    end

    subgraph API["ASP.NET Core Web API (Azure App Service / Render)"]
        Controllers["Controllers (REST/gRPC)"]
        Middleware["Middleware Auth/Tenant"]
        BGServices["Background Services"]
        EFCore["EF Core + Migrations"]
        DomainSvc["Domain Services"]
        Controllers --> EFCore
        Controllers --- Middleware
        Controllers --- BGServices
        Controllers --- DomainSvc
    end

    DB["SQL Server / PostgreSQL"]

    Browser -->|HTTPS| FE
    FE -->|"HTTPS + JWT / Cookie\nGenerated TS types + Zod validation"| API
    EFCore --> DB

The key observation: Next.js and Nuxt sit in front, handling rendering, routing, and auth session management. The ASP.NET Core API handles data, business logic, and everything it already does well. The contract between them is the OpenAPI specification.


Why Keep .NET? A Practical Decision Framework

Before building, understand when this architecture is the right call versus when you should consolidate to a full TypeScript stack.

Keep .NET when:

  • You have existing EF Core queries with years of performance tuning (complex joins, compiled queries, raw SQL fallbacks). Rewriting these in Prisma or Drizzle is expensive and the behavior may not be identical.
  • Your API serves multiple consumers — a mobile app, partner integrations, internal tooling — and is not exclusively the BFF for one frontend.
  • You run CPU-intensive workloads. The CLR’s multi-threading model and JIT compiler genuinely outperform Node.js for compute-heavy tasks. Node.js is single-threaded and non-blocking I/O is its strength, not parallel CPU computation.
  • You have enterprise integrations: Active Directory, MSMQ, COM interop, SOAP services, or Windows-specific APIs. Node.js does not have mature equivalents for these.
  • Your team’s .NET expertise is deep. Rewriting in a new language/runtime while simultaneously shipping features is a reliable way to introduce bugs.
  • You have compliance requirements (SOC 2, HIPAA) already satisfied by your .NET infrastructure.
  • You are using gRPC for high-performance inter-service communication. gRPC-Web lets a TypeScript frontend consume gRPC directly, with Protobuf providing the type source of truth for both C# and TypeScript.

Move toward a TypeScript API when:

  • You are building a new product with a small team and want one language end-to-end.
  • Your API exists solely as a BFF for one Next.js/Nuxt application with no other consumers.
  • You want tRPC’s zero-boilerplate type sharing (tRPC requires both ends to be TypeScript — it cannot work with a .NET backend).
  • Your team’s .NET skills are shallow and your TypeScript skills are stronger.

This article is for the former scenario. You have a good .NET API. Now you need a modern frontend for it.


The New Stack Way

Step 1: Prepare the .NET API for Frontend Consumption

Before touching the frontend, make sure the .NET side is properly configured for a separate client origin.

CORS — get this right from the start:

// Program.cs
var allowedOrigins = builder.Configuration
    .GetSection("Cors:AllowedOrigins")
    .Get<string[]>() ?? [];

builder.Services.AddCors(options =>
{
    options.AddPolicy("Frontend", policy =>
    {
        policy
            .WithOrigins(allowedOrigins)   // Never use AllowAnyOrigin with credentials
            .AllowAnyHeader()
            .AllowAnyMethod()
            .AllowCredentials();           // Required if you send cookies
    });
});

// Must come before UseAuthorization
app.UseCors("Frontend");

In appsettings.Development.json:

{
  "Cors": {
    "AllowedOrigins": ["http://localhost:3000"]
  }
}

In production (environment variable):

Cors__AllowedOrigins__0=https://yourapp.vercel.app
Cors__AllowedOrigins__1=https://yourapp.com

Swagger/OpenAPI — make the spec machine-readable:

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "Products API", Version = "v1" });

    // Include XML comments for richer type documentation
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
    c.IncludeXmlComments(xmlPath);

    // JWT bearer auth in Swagger UI (for manual testing)
    c.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
    {
        Type = SecuritySchemeType.Http,
        Scheme = "bearer",
        BearerFormat = "JWT"
    });
    c.AddSecurityRequirement(new OpenApiSecurityRequirement
    {
        {
            new OpenApiSecurityScheme
            {
                Reference = new OpenApiReference
                {
                    Type = ReferenceType.SecurityScheme,
                    Id = "Bearer"
                }
            },
            []
        }
    });

    // Inline enum definitions so the generated TypeScript unions carry the values.
    // String (not integer) serialization is configured separately via
    // JsonStringEnumConverter — see below and the Gotchas section.
    c.UseInlineDefinitionsForEnums();
});

// Expose the spec at /swagger/v1/swagger.json
app.UseSwagger();

Add to your .csproj:

<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>

Enum serialization — configure globally:

builder.Services.AddControllers()
    .AddJsonOptions(options =>
    {
        // Serialize enums as strings ("Active") not integers (1)
        options.JsonSerializerOptions.Converters.Add(
            new JsonStringEnumConverter());

        // Serialize property names as camelCase
        options.JsonSerializerOptions.PropertyNamingPolicy =
            JsonNamingPolicy.CamelCase;
    });

This is not optional. The string-literal unions in your generated TypeScript types expect string values. An API returning 1 where the type expects "Active" is a silent runtime failure — the value matches no member of the union, so every comparison against it quietly evaluates to false.


Step 2: Generate TypeScript Types from OpenAPI

This is the heart of the type-safety story. You do not write TypeScript interfaces by hand. You generate them from the same Swashbuckle spec that documents your API.

The recommended tool is openapi-typescript. It is fast, produces clean output, and has no runtime dependency — it runs at build/CI time only.

Install:

npm install -D openapi-typescript

Configure in package.json:

{
  "scripts": {
    "generate-api": "openapi-typescript http://localhost:5000/swagger/v1/swagger.json -o src/lib/api-types.gen.ts",
    "generate-api:prod": "openapi-typescript $API_URL/swagger/v1/swagger.json -o src/lib/api-types.gen.ts"
  }
}

What the generated output looks like:

Given the ProductDto from the controller:

public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
    public int StockCount { get; set; }
    public DateTime CreatedAt { get; set; }
    public CategoryDto Category { get; set; } = new();
}

The generator produces:

// src/lib/api-types.gen.ts — DO NOT EDIT BY HAND
export interface components {
  schemas: {
    ProductDto: {
      id: number;
      name: string;
      price: number;
      stockCount: number;
      createdAt: string;        // <-- DateTime becomes string (ISO 8601)
      category: components["schemas"]["CategoryDto"];
    };
    CategoryDto: {
      id: number;
      name: string;
    };
    PagedResultProductDto: {
      items: components["schemas"]["ProductDto"][];
      total: number;
      page: number;
      pageSize: number;
    };
  };
}

// Convenience type aliases
export type ProductDto = components["schemas"]["ProductDto"];
export type CategoryDto = components["schemas"]["CategoryDto"];
export type PagedResultProductDto = components["schemas"]["PagedResultProductDto"];

Note createdAt: string. This is correct and expected. The JSON wire format carries ISO 8601 strings. You must parse them into Date objects explicitly. This is covered in detail in the Gotchas section.
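
To see what that means in practice, here is a minimal sketch using the generated ProductDto — the raw wire type forces the parse to be explicit:

// Sketch — ProductDto is the alias exported by the generated file above
import type { ProductDto } from "@/lib/api-types.gen";

function formatCreatedAt(product: ProductDto): string {
  // product.createdAt.toLocaleDateString();  // compile error — it is a string, not a Date
  return new Date(product.createdAt).toLocaleDateString(); // parse explicitly, or let Zod do it (Step 3)
}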

For heavier use cases — orval:

If you need typed fetch functions, TanStack Query hooks, or mock service worker handlers generated automatically, use orval instead of openapi-typescript. It wraps the type generation and produces ready-to-use hooks:

npm install -D orval

orval.config.ts:

import { defineConfig } from "orval";

export default defineConfig({
  productsApi: {
    input: {
      target: "http://localhost:5000/swagger/v1/swagger.json",
    },
    output: {
      target: "src/lib/api-client.gen.ts",
      client: "react-query",        // generates TanStack Query hooks
      httpClient: "fetch",
      override: {
        mutator: {
          path: "src/lib/custom-fetch.ts",
          name: "customFetch",      // your authenticated fetch wrapper
        },
      },
    },
  },
});

This generates hooks like:

// Generated — do not edit
export const useGetProducts = (
  params: GetProductsParams,
  options?: UseQueryOptions<PagedResultProductDto>
) => {
  return useQuery({
    queryKey: ["products", params],
    queryFn: () => customFetch<PagedResultProductDto>(`/api/products`, { params }),
    ...options,
  });
};

Step 3: Zod Validation at the API Boundary

Generated types tell TypeScript what to expect. Zod tells you at runtime whether the actual response matches. These are complementary, not redundant.

The type generator trusts the OpenAPI spec. If your .NET controller returns a field the spec does not declare, TypeScript will not know about it and will not complain. If your .NET API has a bug and returns null for a field declared non-nullable, TypeScript’s type-level guarantees are violated silently.

Zod closes this gap:

// src/lib/schemas/product.schema.ts
import { z } from "zod";

export const CategorySchema = z.object({
  id: z.number().int().positive(),
  name: z.string().min(1),
});

export const ProductSchema = z.object({
  id: z.number().int().positive(),
  name: z.string().min(1),
  price: z.number().nonnegative(),
  stockCount: z.number().int().nonnegative(),
  // Parse ISO 8601 string -> Date object at the boundary
  createdAt: z.string().datetime().transform((val) => new Date(val)),
  category: CategorySchema,
});

export const PagedResultProductSchema = z.object({
  items: z.array(ProductSchema),
  total: z.number().int().nonnegative(),
  page: z.number().int().positive(),
  pageSize: z.number().int().positive(),
});

// Infer TS types FROM the Zod schema — single source of truth
export type Product = z.infer<typeof ProductSchema>;
export type PagedResultProduct = z.infer<typeof PagedResultProductSchema>;

Integrate into your fetch layer:

// src/lib/api-client.ts
import { PagedResultProductSchema, type PagedResultProduct } from "./schemas/product.schema";

async function fetchProducts(params: GetProductsParams): Promise<PagedResultProduct> {
  const url = new URL(`${process.env.NEXT_PUBLIC_API_URL}/api/products`);
  Object.entries(params).forEach(([k, v]) => {
    if (v != null) url.searchParams.set(k, String(v));
  });

  const res = await fetch(url.toString(), {
    headers: { Authorization: `Bearer ${await getToken()}` },
    next: { tags: ["products"] }, // Next.js cache tag for on-demand revalidation
  });

  if (!res.ok) {
    throw new ApiError(res.status, await res.json());
  }

  const raw = await res.json();

  // safeParse gives you the error without throwing — parse gives you throw-on-failure
  const result = PagedResultProductSchema.safeParse(raw);

  if (!result.success) {
    // Log to your observability platform — this is a contract violation
    console.error("API contract violation:", result.error.format());
    // Re-throw or return a degraded result — your call
    throw new Error(`API response did not match expected schema`);
  }

  return result.data; // Fully typed, date fields are now Date objects
}

A note on parse vs. safeParse: Use safeParse in production and log failures to your observability platform (Sentry, Datadog) rather than throwing blindly. A runtime type mismatch between your .NET API and your frontend schema is important diagnostic information — treat it as such.
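
A minimal sketch of what that can look like, assuming @sentry/nextjs is already configured (parseOrReport is a hypothetical helper name, not a library function):

// src/lib/parse-or-report.ts — sketch only
import * as Sentry from "@sentry/nextjs";
import type { z } from "zod";

export function parseOrReport<T extends z.ZodTypeAny>(
  schema: T,
  raw: unknown,
  context: string
): z.infer<T> {
  const result = schema.safeParse(raw);

  if (!result.success) {
    // A contract violation between the .NET API and the frontend schema — report it
    Sentry.captureException(new Error(`API contract violation: ${context}`), {
      extra: { issues: result.error.issues },
    });
    throw new Error(`API response did not match expected schema (${context})`);
  }

  return result.data;
}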


Step 4: TanStack Query for Data Fetching

TanStack Query (formerly React Query) is the standard for server state management in the React/Next.js ecosystem. Think of it as a combination of IMemoryCache, IHttpClientFactory, and a Blazor/SignalR reactive binding — all in one library.

// src/hooks/use-products.ts
import { useQuery, useMutation, useQueryClient } from "@tanstack/react-query";
import { fetchProducts, purchaseProduct } from "@/lib/api-client";
import type { GetProductsParams } from "@/lib/api-types.gen";

export function useProducts(params: GetProductsParams) {
  return useQuery({
    queryKey: ["products", params],   // Cache key — params changes = new fetch
    queryFn: () => fetchProducts(params),
    staleTime: 60 * 1000,             // Consider data fresh for 60 seconds
    gcTime: 5 * 60 * 1000,           // Keep in memory 5 min after last subscriber
  });
}

export function usePurchaseProduct() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: purchaseProduct,
    onSuccess: () => {
      // Invalidate the products cache — next render refetches from .NET API
      queryClient.invalidateQueries({ queryKey: ["products"] });
    },
    onError: (error: ApiError) => {
      // Handle 400 ProblemDetails from .NET
      console.error("Purchase failed:", error.detail);
    },
  });
}

Using in a component:

// src/components/ProductList.tsx
"use client";

import { useProducts, usePurchaseProduct } from "@/hooks/use-products";

export function ProductList({ categoryId }: { categoryId?: number }) {
  const { data, isLoading, error } = useProducts({
    page: 1,
    pageSize: 20,
    categoryId,
  });

  const purchase = usePurchaseProduct();

  if (isLoading) return <ProductSkeleton />;
  if (error) return <ErrorDisplay error={error} />;

  return (
    <ul>
      {data.items.map((product) => (
        <li key={product.id}>
          <span>{product.name}</span>
          <span>${product.price.toFixed(2)}</span>
          {/* product.createdAt is a Date object here — Zod did the transform */}
          <span>Added {product.createdAt.toLocaleDateString()}</span>
          <button
            onClick={() => purchase.mutate({ productId: product.id, quantity: 1 })}
            disabled={purchase.isPending}
          >
            Purchase
          </button>
        </li>
      ))}
    </ul>
  );
}

Step 5: Authentication — Forwarding JWT to ASP.NET Core

This is where most engineers hit problems. The auth flow differs significantly depending on whether your Next.js components are Server Components (RSC) or Client Components.

This example uses Clerk as the auth provider, but the pattern applies equally to NextAuth.js or Auth0.

The auth problem: Your ASP.NET Core API expects a Bearer JWT in the Authorization header. In Server Components, you can get this token from the server-side session. In Client Components, the token is in the browser’s memory (or a cookie). You need both patterns.

Server Component (SSR data fetch):

// src/app/products/page.tsx — Server Component
import { auth } from "@clerk/nextjs/server";
import { fetchProductsServer } from "@/lib/api-client.server";

export default async function ProductsPage() {
  const { getToken } = await auth();

  // Get the JWT token on the server — never touches the browser
  const token = await getToken();

  // Fetch directly from .NET API — no round-trip through the client
  const products = await fetchProductsServer(token, {
    page: 1,
    pageSize: 20,
  });

  return <ProductList initialData={products} />;
}
// src/lib/api-client.server.ts — server-only fetch utilities
import "server-only"; // Prevents accidental import in client components

export async function fetchProductsServer(
  token: string | null,
  params: GetProductsParams
): Promise<PagedResultProduct> {
  const res = await fetch(
    `${process.env.API_URL}/api/products?${new URLSearchParams(params as Record<string, string>)}`,
    {
      headers: {
        Authorization: token ? `Bearer ${token}` : "",
        "Content-Type": "application/json",
      },
      next: { revalidate: 60, tags: ["products"] },
    }
  );

  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return PagedResultProductSchema.parse(await res.json());
}

Client Component (interactive mutations):

// src/lib/api-client.ts — browser-side fetch utilities
import { useAuth } from "@clerk/nextjs";

// Hook that returns a typed fetch function with auth attached
export function useApiClient() {
  const { getToken } = useAuth();

  return {
    async fetchProducts(params: GetProductsParams): Promise<PagedResultProduct> {
      const token = await getToken();

      const res = await fetch(
        `${process.env.NEXT_PUBLIC_API_URL}/api/products?${new URLSearchParams(
          params as Record<string, string>
        )}`,
        {
          headers: { Authorization: `Bearer ${token}` },
        }
      );

      return PagedResultProductSchema.parse(await res.json());
    },
  };
}

ASP.NET Core JWT validation (accept Clerk-issued tokens):

// Program.cs
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://clerk.your-domain.com"; // Clerk issuer — signing keys discovered via its OIDC metadata
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = false,  // Clerk does not set aud by default
            ValidateLifetime = true,
        };
    });

Step 6: CI Pipeline for Type Generation

The type generation step must run in CI, not just locally. If a developer changes a DTO in the .NET project and does not regenerate types, the TypeScript build will fail — which is exactly what you want.

# .github/workflows/type-check.yml
name: Type Check

on:
  push:
    branches: [main, develop]
  pull_request:

jobs:
  generate-and-check:
    runs-on: ubuntu-latest

    services:
      dotnet-api:
        image: your-registry/your-api:latest
        ports:
          - 5000:80

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Wait for API to be ready
        run: |
          timeout 30 bash -c 'until curl -sf http://localhost:5000/swagger/v1/swagger.json; do sleep 1; done'

      - name: Generate API types
        run: npm run generate-api:prod
        env:
          API_URL: http://localhost:5000

      - name: Check for type drift
        run: |
          git diff --exit-code src/lib/api-types.gen.ts || \
          (echo "API types are out of sync. Run npm run generate-api and commit." && exit 1)

      - name: TypeScript type check
        run: npx tsc --noEmit

      - name: Run tests
        run: npm test

Alternatively, for faster CI pipelines without running the .NET app, generate types from a committed openapi.json file in the repository and regenerate it as part of the .NET build:

# .github/workflows/generate-spec.yml — runs in the .NET CI
# The Swashbuckle CLI writes the spec from the built startup assembly —
# no running web server required
- name: Generate OpenAPI spec
  run: |
    dotnet tool install -g Swashbuckle.AspNetCore.Cli
    swagger tofile --output openapi.json ${{ env.APP_STARTUP_ASSEMBLY }} v1
- name: Commit spec if changed
  run: |
    git diff --exit-code openapi.json || \
    (git add openapi.json && git commit -m "chore: regenerate openapi.json")

Step 7: gRPC-Web (The Alternative for High-Performance Scenarios)

If your .NET API uses gRPC, the Protobuf schema is the single source of truth for both C# and TypeScript types. No OpenAPI step needed.

// products.proto
syntax = "proto3";

package products.v1;

service ProductsService {
  rpc GetProducts (GetProductsRequest) returns (GetProductsResponse);
  rpc Purchase (PurchaseRequest) returns (PurchaseResponse);
}

message Product {
  int32 id = 1;
  string name = 2;
  double price = 3;
  int32 stock_count = 4;
  google.protobuf.Timestamp created_at = 5;
}

message GetProductsResponse {
  repeated Product items = 1;
  int32 total = 2;
}

Generate TypeScript client:

npm install -D @protobuf-ts/plugin
npx protoc --plugin=./node_modules/.bin/protoc-gen-ts \
  --ts_out=src/lib/proto \
  --ts_opt=long_type_string \
  products.proto

The Timestamp type resolves to a proper Date object in the TypeScript client — avoiding the DateTime string gotcha entirely.
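
Calling the generated client from TypeScript looks roughly like this — a minimal sketch assuming the @protobuf-ts/grpcweb-transport package and the *.client.ts file protobuf-ts emits (the GetProductsRequest fields are not shown in the .proto above, so the request is left to the caller):

// Sketch — client and transport names follow protobuf-ts conventions
import { GrpcWebFetchTransport } from "@protobuf-ts/grpcweb-transport";
import { ProductsServiceClient } from "@/lib/proto/products.client";
import type { GetProductsRequest } from "@/lib/proto/products";

const transport = new GrpcWebFetchTransport({
  baseUrl: process.env.NEXT_PUBLIC_API_URL!, // must be reachable over gRPC-Web (see proxy note below)
});
const client = new ProductsServiceClient(transport);

export async function getProducts(request: GetProductsRequest) {
  // Awaiting the unary call yields the finished call; response is the typed message
  const { response } = await client.getProducts(request);
  return response.items;
}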

gRPC-Web requires either a proxy (Envoy) or in-process translation (the Grpc.AspNetCore.Web middleware via UseGrpcWeb) to bridge the browser’s HTTP/1.1 requests to gRPC’s HTTP/2 protocol. In practice, REST + OpenAPI is simpler for standard CRUD scenarios. Reserve gRPC for high-throughput scenarios where the binary protocol and bidirectional streaming justify the setup overhead.


Key Differences

Concern         | ASP.NET Core                     | Next.js consuming ASP.NET Core
Type sharing    | Implicit (same project/solution) | Codegen from OpenAPI spec
API contract    | Enforced by compiler             | Enforced by type gen + Zod at runtime
Auth flow       | Middleware/attributes            | Server: getToken() from server session; Client: hook
Date/time       | DateTime, DateTimeOffset         | string in JSON, Date in TS after Zod parse
Enum values     | int by default                   | Must configure JsonStringEnumConverter
Cache           | IMemoryCache, IDistributedCache  | TanStack Query + Next.js fetch cache
Error handling  | ProblemDetails (RFC 7807)        | Parse ProblemDetails shape in error handler
Null handling   | Nullable reference types         | undefined for absent fields, null for explicit null

Gotchas for .NET Engineers

Gotcha 1: DateTime Serialization — The Silent Data Corruption Bug

In .NET, DateTime serializes to ISO 8601 by default, but whether the string carries timezone information depends on the value’s Kind. A DateTime with Kind = Unspecified — the common case for values loaded from SQL Server — serializes as "2026-01-15T09:30:00", with no suffix. When JavaScript parses an ISO string without a suffix, it assumes local time; with a suffix (Z or +00:00), it uses the stated offset.

The problem is DateTime vs DateTimeOffset:

// This serializes as "2026-01-15T09:30:00" — no timezone info
public DateTime CreatedAt { get; set; }

// This serializes as "2026-01-15T09:30:00+00:00" — explicit UTC offset
public DateTimeOffset CreatedAt { get; set; }

When TypeScript does new Date("2026-01-15T09:30:00"), the result depends on the browser’s local timezone. A user in Tokyo sees a different time than a user in New York. This is a data display bug that is almost impossible to catch in development (everyone on the team is in the same timezone).

Fix:

  1. Use DateTimeOffset everywhere in your .NET DTOs, or configure DateTime to serialize as UTC:
options.JsonSerializerOptions.Converters.Add(
    new JsonConverterDateTimeAsUtc());  // Custom converter or use NodaTime serializers
  2. In your Zod schema, always parse the string and be explicit:
createdAt: z.string().datetime({ offset: true }).transform((val) => new Date(val)),

The offset: true flag requires an explicit offset in the ISO string — it will reject bare "2026-01-15T09:30:00" without Z or +HH:MM, surfacing the bug immediately.

  3. In CI, add a test that checks a known UTC timestamp round-trips correctly through your API (a minimal sketch follows).
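
A minimal sketch of such a check, written here as a Vitest unit test against the Zod boundary rather than a live API call (an assumption about where you want the check to live):

// product.schema.test.ts — sketch only
import { describe, expect, it } from "vitest";
import { z } from "zod";

const createdAt = z.string().datetime({ offset: true }).transform((v) => new Date(v));

describe("createdAt contract", () => {
  it("preserves a UTC instant round-trip", () => {
    const parsed = createdAt.parse("2026-01-15T09:30:00Z");
    expect(parsed.toISOString()).toBe("2026-01-15T09:30:00.000Z");
  });

  it("rejects a timestamp with no offset — the silent local-time bug", () => {
    expect(() => createdAt.parse("2026-01-15T09:30:00")).toThrow();
  });
});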

Gotcha 2: Enum Integer vs. String — TypeScript Union Exhaustion Fails Silently

By default, System.Text.Json serializes enums as their integer values:

public enum OrderStatus { Pending = 0, Processing = 1, Shipped = 2, Delivered = 3 }
// Serializes as: { "status": 1 }

Your OpenAPI-generated TypeScript type will be:

// Without JsonStringEnumConverter
status: number;  // Loses all semantic information

// What you wanted
status: "Pending" | "Processing" | "Shipped" | "Delivered";

With integer enums, TypeScript cannot perform exhaustive switch checks, your UI cannot render human-readable labels without a separate mapping table, and adding a new enum value does not trigger a type error in the frontend.

Fix — apply globally:

builder.Services.AddControllers()
    .AddJsonOptions(options =>
    {
        options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter());
    });

This is a breaking change if you have existing consumers that send integer values. Coordinate the change across all API consumers before deploying.
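
Once the API emits strings, the generated union buys you compile-time exhaustiveness — a minimal sketch of the pattern:

// Sketch — OrderStatus mirrors the union the generator produces from the C# enum
type OrderStatus = "Pending" | "Processing" | "Shipped" | "Delivered";

function statusLabel(status: OrderStatus): string {
  switch (status) {
    case "Pending":
      return "Awaiting confirmation";
    case "Processing":
      return "Being prepared";
    case "Shipped":
      return "On its way";
    case "Delivered":
      return "Delivered";
    default: {
      // If .NET adds a value and you regenerate types, this assignment becomes a
      // compile error — the early warning integer enums never give you
      const unreachable: never = status;
      return unreachable;
    }
  }
}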

Gotcha 3: null vs. undefined — The Behavioral Gap

ASP.NET Core and System.Text.Json serialize missing optional values as null. TypeScript’s type system distinguishes between null (explicitly absent) and undefined (property does not exist), but JSON has no equivalent of undefined — it can only represent null or a missing key.

This creates a mapping problem:

// From the API, you receive:
{ "description": null }

// TypeScript's OpenAPI-generated type:
description: string | null;

// But in your component, you might write:
if (!product.description) { ... }     // Catches both null and ""
if (product.description == null) { } // Catches null but not undefined
if (product.description === undefined) { } // Never true — JSON always sends null

The practical impact: form state in React typically uses undefined for “user has not entered a value yet” and "" for “user cleared the field”. When you pre-populate a form from API data, null from the API becomes null in your form state, which React controlled inputs treat differently from undefined.

Fix:

In your Zod schema, normalize null to undefined for optional fields if your form layer expects it:

description: z.string().nullable().optional().transform((val) => val ?? undefined),

Or leave it as null and handle it consistently in your form state management. The key is picking one convention and enforcing it at the Zod boundary.

Gotcha 4: ProblemDetails Error Handling

ASP.NET Core returns errors as ProblemDetails (RFC 7807) when you use the built-in validation and [ApiController] attribute. The shape is:

{
  "type": "https://tools.ietf.org/html/rfc7807",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "Price": ["Price must be greater than 0"],
    "Name": ["Name is required"]
  }
}

TypeScript fetch does not throw on 4xx/5xx responses — it only throws on network failure. You must check res.ok explicitly. The typical .NET instinct is to wrap everything in a try/catch and expect HTTP errors to throw — they do not.

// Wrong — res is "successful" from fetch's perspective even for a 400
const res = await fetch("/api/products", { method: "POST", body: JSON.stringify(data) });
const json = await res.json(); // Contains ProblemDetails, not your ProductDto

// Correct
const res = await fetch("/api/products", { method: "POST", body: JSON.stringify(data) });

if (!res.ok) {
  const problem = await res.json() as ProblemDetails;
  // Map validation errors back to form fields
  throw new ApiError(res.status, problem);
}

const product = ProductSchema.parse(await res.json());

Define a typed ProblemDetails interface:

interface ProblemDetails {
  type?: string;
  title?: string;
  status?: number;
  detail?: string;
  errors?: Record<string, string[]>; // Validation errors
}
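
The ApiError used in earlier snippets is not from a library — a minimal sketch of one reasonable shape for it:

// src/lib/api-error.ts — sketch only
export class ApiError extends Error {
  constructor(
    public readonly status: number,
    public readonly problem: ProblemDetails
  ) {
    super(problem.title ?? `API error ${status}`);
    this.name = "ApiError";
  }

  get detail(): string | undefined {
    return this.problem.detail;
  }

  // Validation errors keyed by field name — map these back onto form fields
  get fieldErrors(): Record<string, string[]> {
    return this.problem.errors ?? {};
  }
}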

Gotcha 5: Next.js Server Component Fetch Caching and Stale Data

Next.js extends the browser fetch API with caching behavior that has no equivalent in .NET. In Next.js 15, fetch calls in Server Components are not cached unless you explicitly opt in; in Next.js 14 the default was the opposite — cached until you opted out. The shifting defaults confuse engineers coming from any background.

// No caching — fetches on every request (the default in Next.js 15)
fetch("/api/products")

// Cache for 60 seconds
fetch("/api/products", { next: { revalidate: 60 } })

// Tag the response for on-demand invalidation via revalidateTag
fetch("/api/products", { next: { tags: ["products"] } })

// On a form submit, invalidate the tag:
import { revalidateTag } from "next/cache";
revalidateTag("products"); // Next.js re-fetches on next request

If your Server Components show stale data after a mutation, check whether your fetch calls have appropriate revalidate or tags configuration, and whether your mutation handlers call revalidateTag.
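
For example, a minimal sketch of a Server Action that calls the .NET API and then invalidates the tag (purchaseProduct is the typed wrapper assumed earlier in this chapter):

// src/app/actions/purchase.ts — sketch only
"use server";

import { revalidateTag } from "next/cache";
import { purchaseProduct } from "@/lib/api-client";

export async function purchaseAction(productId: number, quantity: number) {
  // The mutation goes to the ASP.NET Core endpoint
  await purchaseProduct({ productId, quantity });

  // Any Server Component fetch tagged "products" is re-fetched on the next request
  revalidateTag("products");
}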


Hands-On Exercise

Goal: Connect a Next.js 14 app to a running ASP.NET Core API with full type safety.

Prerequisites:

  • ASP.NET Core 8+ API running locally on port 5000 with Swashbuckle configured
  • Next.js 14 app (create with npx create-next-app@latest --typescript)
  • Node.js 20+

Step 1 — Verify the OpenAPI spec is accessible:

curl http://localhost:5000/swagger/v1/swagger.json | jq '.paths | keys[]'

You should see your endpoint paths listed.

Step 2 — Generate TypeScript types:

npm install -D openapi-typescript
npx openapi-typescript http://localhost:5000/swagger/v1/swagger.json \
  --output src/lib/api-types.gen.ts

Open src/lib/api-types.gen.ts and examine the generated interfaces. Find a DateTime property. Note that it is typed as string. Find any enum properties — note they are string union types (if you configured JsonStringEnumConverter) or number (if you did not).

Step 3 — Write a Zod schema for one endpoint’s response:

Pick the simplest GET endpoint in your API. Write a Zod schema for its response shape, using z.string().datetime() for date fields and .transform((v) => new Date(v)) to hydrate them.

Step 4 — Write a typed fetch function:

Write a function that fetches from your .NET endpoint, runs the Zod parse, and returns the typed result. Call it from a Server Component and render the data.

Step 5 — Add TanStack Query for an interactive mutation:

Install @tanstack/react-query. Wrap your app in QueryClientProvider. Write a useMutation hook for a POST endpoint. Wire it to a button click in a Client Component.

Step 6 — Simulate a contract violation:

Temporarily change one of your .NET DTO properties (rename it or change its type). Run npm run generate-api — observe the generated types change. Note where TypeScript now shows errors. Revert the .NET change and observe that npm run generate-api restores the types.

This exercise demonstrates the feedback loop: the type generator is the early warning system for cross-language contract drift.


Quick Reference

OpenAPI type generation (one-time setup)
  npm install -D openapi-typescript
  npx openapi-typescript <url>/swagger.json -o src/lib/api-types.gen.ts

For hooks generation: use orval instead
  npm install -D orval
  npx orval  (reads orval.config.ts)

Zod date/time pattern
  z.string().datetime({ offset: true }).transform((v) => new Date(v))

Zod nullable-to-optional normalization
  z.string().nullable().optional().transform((v) => v ?? undefined)

Fetch error pattern (fetch does NOT throw on 4xx/5xx)
  if (!res.ok) { const err = await res.json(); throw new ApiError(res.status, err); }

Next.js fetch cache tags
  fetch(url, { next: { tags: ["products"] } })
  revalidateTag("products")  // In Server Action after mutation

Server-only imports (prevent client bundle leaks)
  import "server-only"  // At top of any file with server secrets

CORS — allow credentials requires explicit origin list
  policy.WithOrigins(allowedOrigins).AllowCredentials()
  // Never: AllowAnyOrigin() + AllowCredentials() — will throw

Enum serialization — must add globally to .NET
  options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter())

DateTime — use DateTimeOffset in DTOs, not DateTime
  public DateTimeOffset CreatedAt { get; set; }

ProblemDetails shape (ASP.NET Core validation errors)
  { status: number, title: string, errors: Record<string, string[]> }

gRPC-Web type generation
  npm install -D @protobuf-ts/plugin
  npx protoc --plugin=protoc-gen-ts --ts_out=src/lib/proto *.proto

Further Reading

Python as the Middle Tier: AI/ML/NLP Backends with a TypeScript Frontend

For .NET engineers who know: ASP.NET Core Minimal APIs, async/await, dependency injection, and strongly typed API contracts You’ll learn: When Python is the pragmatic backend choice for AI/ML workloads, how to build a type-safe bridge between FastAPI and a TypeScript frontend, and how to stream LLM responses in real time Time: 25-30 min read


The .NET Way (What You Already Know)

When you build a standard backend in .NET, the full stack is coherent: C# types define the domain model, EF Core maps them to the database, the ASP.NET Core pipeline handles auth and middleware, and Swashbuckle generates an OpenAPI spec. The compiler enforces the contract between every layer.

For standard CRUD and business logic, this is an excellent setup. But when your product requires machine learning inference, LLM orchestration, or NLP pipelines, you run into a wall that no amount of C# skill resolves: the AI/ML ecosystem is in Python, and it is not moving.

PyTorch, TensorFlow, Hugging Face Transformers, LangChain, LlamaIndex, scikit-learn, spaCy, NumPy, pandas, and the entire vector search ecosystem (Pinecone, Weaviate, pgvector clients) have their canonical implementations in Python. The .NET equivalents are either thin wrappers, significantly behind the Python versions, or simply absent.

When your product needs ML inference, you do not write a ONNX wrapper in C# to avoid learning Python. You pick up FastAPI, which is — as you will see — closer to ASP.NET Core Minimal APIs than it is to anything alien, and you build a type-safe bridge between it and your TypeScript frontend.


The Architecture

graph TD
    Browser["Browser"]

    subgraph FE["Next.js / Nuxt (Vercel / Render)"]
        SC["Server Components\n(static data, SEO content)"]
        CC["Client Components\n(streaming chat, interactive UI)"]
    end

    subgraph PY["FastAPI (Python)\nAI / ML / NLP endpoints"]
        HF["Hugging Face Models"]
        LC["LangChain / RAG"]
        VS["Vector Search"]
        NLP["spaCy / NLTK"]
    end

    subgraph TS["NestJS or ASP.NET Core\nStandard CRUD endpoints"]
        EF["EF Core / Prisma"]
        BL["Business Logic\nAuth / Billing"]
    end

    subgraph DATA["Data Layer"]
        PG["PostgreSQL + pgvector"]
        PIN["Pinecone / Weaviate"]
        HFH["Hugging Face Hub"]
        OAI["OpenAI / Anthropic APIs"]
    end

    Browser -->|HTTPS| FE
    SC -->|"Generated TS types\nZod validation"| PY
    CC -->|"SSE / fetch streaming"| PY
    SC --> TS
    CC --> TS
    PY --> PG
    PY --> PIN
    PY --> HFH
    PY --> OAI
    TS --> PG

The frontend speaks to two backends:

  • FastAPI handles everything AI-related: inference, embeddings, vector search, LLM orchestration, streaming responses.
  • NestJS or ASP.NET Core handles standard CRUD: users, billing, settings, content management — anything that fits a relational model and does not need ML.

Both backends expose OpenAPI specifications. Your TypeScript frontend generates types from both and talks to each directly.


Why Python? The Honest Technical Case

A .NET engineer deserves a straightforward answer, not marketing.

Python IS the right choice for:

  • ML model inference: PyTorch and TensorFlow are written in C++, with Python bindings as the primary interface. Hugging Face’s transformers library has thousands of pretrained models with three-line inference code. The ONNX Runtime has a .NET SDK, but the Hugging Face model hub is Python-native — the gap in available models is enormous.

  • LLM orchestration: LangChain, LlamaIndex, and DSPy are Python-first. They have Node.js ports, but those ports lag behind the Python versions by months and lack many features. If you are building RAG pipelines, AI agents, or multi-model chains, you want the Python versions.

  • Vector search and embeddings: Generating embeddings, indexing them in pgvector or Pinecone, and performing semantic search is a first-class operation in Python. Every vector database has a mature Python client. The .NET clients exist but are often community-maintained.

  • Data science APIs: If your product surfaces ML-derived analytics — clustering, anomaly detection, recommendation scores — Python’s scientific stack (NumPy, pandas, scikit-learn) is the right tool. Implementing these algorithms in C# is possible but there is no ecosystem equivalent.

  • NLP pipelines: spaCy for entity recognition, NLTK for text preprocessing, Sentence Transformers for semantic similarity — these have no serious .NET equivalents.

Python is NOT the right choice for:

  • Standard CRUD: FastAPI can do CRUD just as well as ASP.NET Core, but there is no reason to prefer it for record-level database operations. Use your existing .NET API or NestJS.

  • High-concurrency real-time systems: Python’s GIL (Global Interpreter Lock) is a real architectural constraint for CPU-bound concurrency. More on this below.

  • Complex business logic with deep domain models: Python’s type system is opt-in and structural. For complex domains with invariants, C#’s compiler-enforced type system catches more bugs. Python works, but you trade away compile-time guarantees.

  • Teams with no Python experience: FastAPI is approachable, but if your team has zero Python exposure and your use case does not specifically require the ML ecosystem, use NestJS. Learning Python and the ML ecosystem simultaneously while shipping product is challenging.


FastAPI vs. ASP.NET Core: The Mental Model Bridge

FastAPI is the closest thing Python has to ASP.NET Core Minimal APIs. If you can read Minimal API code, you can read FastAPI code within hours.

ASP.NET Core Minimal API:

// Program.cs
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddScoped<IProductService, ProductService>();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

app.MapGet("/api/products/{id}", async (
    int id,
    IProductService service,
    CancellationToken ct) =>
{
    var product = await service.GetAsync(id, ct);
    return product is not null ? Results.Ok(product) : Results.NotFound();
});

app.Run();

FastAPI equivalent:

# main.py
from fastapi import FastAPI, Depends, HTTPException
from contextlib import asynccontextmanager
from services.product_service import ProductService

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup (equivalent to builder.Services.AddScoped, etc.)
    yield
    # Shutdown

app = FastAPI(lifespan=lifespan)

def get_product_service() -> ProductService:
    return ProductService()  # DI is manual or via a library like dependency-injector

@app.get("/api/products/{product_id}", response_model=ProductDto)
async def get_product(
    product_id: int,
    service: ProductService = Depends(get_product_service)
):
    product = await service.get(product_id)
    if not product:
        raise HTTPException(status_code=404, detail="Product not found")
    return product

The structure is nearly identical. Route registration with path parameters, dependency injection, async handlers, automatic OpenAPI generation. The differences are syntax, not architecture.

Pydantic vs. C# DTOs:

Pydantic models are the Python equivalent of C# record types with Data Annotations — they define the shape of data and validate it at instantiation:

# Pydantic — Python
from pydantic import BaseModel, Field, field_validator
from datetime import datetime
from enum import Enum

class ProductStatus(str, Enum):
    active = "active"
    discontinued = "discontinued"
    out_of_stock = "out_of_stock"

class ProductDto(BaseModel):
    id: int
    name: str = Field(min_length=1, max_length=200)
    price: float = Field(gt=0)
    stock_count: int = Field(ge=0)
    status: ProductStatus
    created_at: datetime

    @field_validator("name")
    @classmethod
    def name_must_not_be_empty_after_strip(cls, v: str) -> str:
        stripped = v.strip()
        if not stripped:
            raise ValueError("Name cannot be only whitespace")
        return stripped
// Equivalent C# DTO
public enum ProductStatus { Active, Discontinued, OutOfStock }

public record ProductDto
{
    public int Id { get; init; }

    [Required, StringLength(200, MinimumLength = 1)]
    public string Name { get; init; } = string.Empty;

    [Range(double.Epsilon, double.MaxValue, ErrorMessage = "Price must be positive")]
    public decimal Price { get; init; }

    [Range(0, int.MaxValue)]
    public int StockCount { get; init; }

    public ProductStatus Status { get; init; }
    public DateTime CreatedAt { get; init; }
}

Pydantic validates at assignment time, similar to how ASP.NET Core validates model binding before your controller action runs. FastAPI feeds incoming request bodies through the Pydantic model automatically, returning a structured 422 Unprocessable Entity error if validation fails — analogous to ASP.NET Core’s automatic 400 Bad Request with [ApiController].
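
That 422 has a different wire shape than ASP.NET Core’s ProblemDetails — a minimal sketch of handling it on the TypeScript side, following FastAPI’s documented validation error format:

// FastAPI validation errors arrive as { detail: [{ loc, msg, type }, ...] } — sketch only
interface FastApiValidationError {
  detail: Array<{
    loc: (string | number)[]; // e.g. ["body", "text"]
    msg: string;
    type: string;
  }>;
}

async function throwIfAiApiError(res: Response): Promise<void> {
  if (res.ok) return;

  if (res.status === 422) {
    const body = (await res.json()) as FastApiValidationError;
    const summary = body.detail.map((e) => `${e.loc.join(".")}: ${e.msg}`).join("; ");
    throw new Error(`Validation failed: ${summary}`);
  }

  throw new Error(`AI API error: ${res.status}`);
}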

Python async/await — similar concept, different threading model:

# Python async/await looks familiar
async def get_embedding(text: str) -> list[float]:
    response = await client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

But the underlying model is different. C#’s async/await runs on a thread pool — await suspends the current method and frees a thread to do other work, and when the awaited task completes, execution resumes on a thread pool thread. Python’s asyncio event loop is single-threaded: there is one loop per process, and await yields control back to that loop, which picks up the next ready coroutine. There are no multiple threads involved in standard Python async code.

This is fine for I/O-bound work (HTTP calls, database queries, LLM API calls). It is a problem for CPU-bound work — running a heavy ML model inference on the async event loop blocks the entire event loop until inference completes.

The solution is to run CPU-bound work in a thread pool executor or a process pool:

import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)

async def run_model_inference(text: str) -> str:
    loop = asyncio.get_event_loop()
    # Run the CPU-bound inference in a thread pool
    # This yields control back to the event loop while inference runs
    result = await loop.run_in_executor(
        executor,
        lambda: model_pipeline(text)  # synchronous, CPU-heavy call
    )
    return result

The GIL — Python’s biggest web serving limitation:

The Global Interpreter Lock (GIL) prevents multiple Python threads from executing Python bytecode simultaneously. In practice:

  • For I/O-bound async work: the GIL is released during I/O waits, so it rarely matters.
  • For CPU-bound work in threads: the GIL means your threads do not actually run in parallel on multiple cores.
  • The solution for CPU-intensive workloads is multiprocessing (separate Python processes, each with its own GIL) or libraries like NumPy and PyTorch, which release the GIL during their C-level operations.

For a FastAPI service handling LLM API calls (which are network I/O), the GIL is essentially irrelevant. For a service doing real-time model inference in pure Python, you need to think about multiprocessing or model servers like Triton Inference Server.

The practical guidance: FastAPI with async I/O and background tasks handles typical AI API workloads (calling OpenAI, running Hugging Face inference, semantic search) without GIL issues. If you need to saturate multiple CPU cores with Python bytecode, that is a specialized workload that requires a different architecture.


Type Safety Across the Python Boundary

Step 1: FastAPI + Pydantic Generates OpenAPI Automatically

FastAPI generates an OpenAPI spec from your Pydantic models and route definitions automatically — no additional setup required:

# main.py
from fastapi import FastAPI
from pydantic import BaseModel, Field
from typing import Optional
from enum import Enum

app = FastAPI(
    title="AI API",
    version="1.0.0",
    description="ML inference and NLP endpoints"
)

class SentimentResult(BaseModel):
    text: str
    sentiment: str  # "positive" | "negative" | "neutral"
    confidence: float = Field(ge=0.0, le=1.0)
    processing_time_ms: float

class ClassifyRequest(BaseModel):
    text: str = Field(min_length=1, max_length=10_000)
    model_id: Optional[str] = None  # Override default model

@app.post("/api/classify/sentiment", response_model=SentimentResult)
async def classify_sentiment(request: ClassifyRequest) -> SentimentResult:
    ...

The spec is available at http://localhost:8000/openapi.json. Feed this to openapi-typescript exactly as you would the ASP.NET Core Swashbuckle spec:

npx openapi-typescript http://localhost:8000/openapi.json \
  --output src/lib/ai-api-types.gen.ts

Your Next.js project can consume types from two generated files simultaneously:

// src/lib/api-types.gen.ts     — from ASP.NET Core / NestJS
// src/lib/ai-api-types.gen.ts  — from FastAPI
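
Both files can be consumed side by side — a minimal sketch using the schema names that appear earlier in this chapter:

// src/lib/api-types.ts — sketch only
import type { components as CrudApi } from "@/lib/api-types.gen"; // ASP.NET Core / NestJS
import type { components as AiApi } from "@/lib/ai-api-types.gen"; // FastAPI

export type Product = CrudApi["schemas"]["ProductDto"];
export type SentimentResult = AiApi["schemas"]["SentimentResult"];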

Step 2: Pydantic ↔ Zod Translation Guide

The schemas you write on the Python side have direct equivalents in Zod on the TypeScript side. Maintaining both in sync is the key discipline.

# Python / Pydantic
from pydantic import BaseModel, Field
from typing import Optional, List
from datetime import datetime
from enum import Enum

class Sentiment(str, Enum):
    positive = "positive"
    negative = "negative"
    neutral = "neutral"

class Entity(BaseModel):
    text: str
    label: str
    start: int
    end: int
    score: float = Field(ge=0.0, le=1.0)

class AnalysisResult(BaseModel):
    id: str
    original_text: str
    sentiment: Sentiment
    entities: List[Entity]
    summary: Optional[str] = None
    processed_at: datetime
    token_count: int = Field(ge=0)
// TypeScript / Zod — mirrors the Pydantic schema
import { z } from "zod";

// str Enum with (str, Enum) -> z.enum()
const SentimentSchema = z.enum(["positive", "negative", "neutral"]);

// Nested model -> nested z.object()
const EntitySchema = z.object({
  text: z.string(),
  label: z.string(),
  start: z.number().int().nonnegative(),
  end: z.number().int().nonnegative(),
  score: z.number().min(0).max(1),
});

// Optional[str] = None -> .nullable().optional() or .nullish()
// datetime -> z.string().datetime() with transform
const AnalysisResultSchema = z.object({
  id: z.string(),
  original_text: z.string(),
  sentiment: SentimentSchema,
  entities: z.array(EntitySchema),
  summary: z.string().nullable().optional(),   // Optional[str] = None
  processed_at: z.string().datetime().transform((v) => new Date(v)),
  token_count: z.number().int().nonnegative(),
});

export type AnalysisResult = z.infer<typeof AnalysisResultSchema>;

Pydantic to Zod field mapping:

Pydantic            | Zod equivalent
str                 | z.string()
int                 | z.number().int()
float               | z.number()
bool                | z.boolean()
datetime            | z.string().datetime().transform(v => new Date(v))
Optional[T]         | z.T().nullable().optional()
List[T]             | z.array(z.T())
Dict[str, T]        | z.record(z.T())
str Enum            | z.enum([...values])
Field(ge=0, le=1)   | .min(0).max(1)
Field(min_length=1) | .min(1)
Literal["a", "b"]   | z.literal("a").or(z.literal("b"))

Contract testing with schemathesis:

Schemathesis is a Python library that fuzzes your FastAPI endpoints against their own OpenAPI spec — it generates random valid and invalid inputs and verifies the responses match the declared schema:

pip install schemathesis
schemathesis run http://localhost:8000/openapi.json --checks all

Add to your Python CI:

# .github/workflows/python-api.yml
- name: Run schemathesis contract tests
  run: |
    schemathesis run http://localhost:8000/openapi.json \
      --checks all \
      --stateful=links \
      --max-examples=50

Step 3: Building a FastAPI ML Endpoint

Here is a complete FastAPI endpoint running a Hugging Face sentiment analysis model. This is the kind of code that has no clean equivalent in .NET:

# api/routes/classify.py
from fastapi import APIRouter, HTTPException, BackgroundTasks
from pydantic import BaseModel, Field
from typing import Optional
import asyncio
import time
import logging
from concurrent.futures import ThreadPoolExecutor

from transformers import pipeline
from functools import lru_cache

logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/classify", tags=["classification"])

# Thread pool for CPU-bound inference
_executor = ThreadPoolExecutor(max_workers=2)

# Models are heavy — cache them at module level
@lru_cache(maxsize=3)
def get_sentiment_pipeline(model_id: str):
    logger.info(f"Loading model: {model_id}")
    return pipeline(
        "sentiment-analysis",
        model=model_id,
        device=-1,  # -1 = CPU, 0 = first GPU
        truncation=True,
        max_length=512
    )

class SentimentRequest(BaseModel):
    text: str = Field(min_length=1, max_length=10_000)
    model_id: str = "distilbert-base-uncased-finetuned-sst-2-english"

class SentimentResponse(BaseModel):
    text: str
    label: str
    score: float = Field(ge=0.0, le=1.0)
    model_id: str
    processing_time_ms: float

@router.post("/sentiment", response_model=SentimentResponse)
async def classify_sentiment(request: SentimentRequest) -> SentimentResponse:
    start = time.perf_counter()

    pipe = get_sentiment_pipeline(request.model_id)

    # Run CPU-bound inference off the event loop
    loop = asyncio.get_event_loop()
    try:
        result = await loop.run_in_executor(
            _executor,
            lambda: pipe(request.text)[0]
        )
    except Exception as e:
        logger.error(f"Inference failed for model {request.model_id}: {e}")
        raise HTTPException(
            status_code=503,
            detail=f"Model inference failed: {str(e)}"
        )

    processing_ms = (time.perf_counter() - start) * 1000

    return SentimentResponse(
        text=request.text,
        label=result["label"].lower(),  # "POSITIVE" -> "positive"
        score=result["score"],
        model_id=request.model_id,
        processing_time_ms=processing_ms
    )
# main.py
from fastapi import FastAPI
from contextlib import asynccontextmanager
from api.routes import classify
import uvicorn

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Warm up the default model on startup
    # (avoids cold start latency on first request)
    from api.routes.classify import get_sentiment_pipeline
    get_sentiment_pipeline("distilbert-base-uncased-finetuned-sst-2-english")
    yield
    # Cleanup on shutdown if needed

app = FastAPI(
    title="AI API",
    version="1.0.0",
    lifespan=lifespan
)

app.include_router(classify.router)

# CORS
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000", "https://yourapp.vercel.app"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

Run locally:

pip install fastapi uvicorn[standard] transformers torch
uvicorn main:app --reload --port 8000

The OpenAPI spec is now at http://localhost:8000/openapi.json.


Step 4: Streaming LLM Responses with Server-Sent Events

This is the most important section for any product involving LLMs. Streaming is not optional for LLM UX — users will not wait 10 seconds staring at a spinner while a response generates. The pattern is Server-Sent Events (SSE): the server sends a stream of chunks, and the frontend renders each chunk as it arrives.

The FastAPI streaming endpoint:

# api/routes/chat.py
from fastapi import APIRouter, Request
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field
from typing import AsyncGenerator
from openai import AsyncOpenAI
import json

router = APIRouter(prefix="/api/chat", tags=["chat"])
client = AsyncOpenAI()  # Reads OPENAI_API_KEY from env

class ChatMessage(BaseModel):
    role: str   # "user" | "assistant" | "system"
    content: str

class ChatRequest(BaseModel):
    messages: list[ChatMessage] = Field(min_length=1)
    model: str = "gpt-4o-mini"
    max_tokens: int = Field(default=1024, ge=1, le=4096)

async def generate_stream(request: ChatRequest) -> AsyncGenerator[str, None]:
    """
    Yields Server-Sent Events formatted strings.
    SSE format: 'data: <json>\n\n'
    """
    try:
        async with client.beta.chat.completions.stream(
            model=request.model,
            messages=[m.model_dump() for m in request.messages],
            max_tokens=request.max_tokens,
        ) as stream:
            async for event in stream:
                if event.type == "content.delta":
                    # Each chunk is a small piece of the response text
                    chunk_data = json.dumps({
                        "type": "delta",
                        "content": event.delta
                    })
                    yield f"data: {chunk_data}\n\n"

                elif event.type == "content.done":
                    # Signal completion with usage information
                    final_data = json.dumps({
                        "type": "done",
                        "usage": {
                            "prompt_tokens": event.parsed_completion.usage.prompt_tokens
                                if event.parsed_completion.usage else None,
                            "completion_tokens": event.parsed_completion.usage.completion_tokens
                                if event.parsed_completion.usage else None,
                        }
                    })
                    yield f"data: {final_data}\n\n"

    except Exception as e:
        error_data = json.dumps({"type": "error", "message": str(e)})
        yield f"data: {error_data}\n\n"

@router.post("/stream")
async def chat_stream(request: ChatRequest):
    return StreamingResponse(
        generate_stream(request),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "X-Accel-Buffering": "no",  # Disable Nginx buffering
            "Connection": "keep-alive",
        }
    )

The Next.js streaming chat UI — complete implementation:

// src/components/ChatInterface.tsx
"use client";

import { useState, useRef, useCallback } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

interface StreamEvent {
  type: "delta" | "done" | "error";
  content?: string;
  message?: string;
  usage?: {
    prompt_tokens: number | null;
    completion_tokens: number | null;
  };
}

export function ChatInterface() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const abortControllerRef = useRef<AbortController | null>(null);

  const sendMessage = useCallback(async () => {
    if (!input.trim() || isStreaming) return;

    const userMessage: Message = { role: "user", content: input.trim() };
    const updatedMessages = [...messages, userMessage];

    setMessages(updatedMessages);
    setInput("");
    setIsStreaming(true);
    setError(null);

    // Add empty assistant message that will be filled as chunks arrive
    setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

    abortControllerRef.current = new AbortController();

    try {
      const response = await fetch(
        `${process.env.NEXT_PUBLIC_AI_API_URL}/api/chat/stream`,
        {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            messages: updatedMessages,
            model: "gpt-4o-mini",
          }),
          signal: abortControllerRef.current.signal,
        }
      );

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${await response.text()}`);
      }

      // ReadableStream for SSE processing
      const reader = response.body?.getReader();
      if (!reader) throw new Error("No response body");

      const decoder = new TextDecoder();
      let buffer = "";

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split("\n");

        // Keep the last potentially incomplete line in the buffer
        buffer = lines.pop() ?? "";

        for (const line of lines) {
          if (!line.startsWith("data: ")) continue;

          const jsonStr = line.slice(6).trim();
          if (!jsonStr) continue;

          try {
            const event = JSON.parse(jsonStr) as StreamEvent;

            if (event.type === "delta" && event.content) {
              // Append chunk to the last (assistant) message
              setMessages((prev) => {
                const updated = [...prev];
                const last = updated[updated.length - 1];
                if (last.role === "assistant") {
                  updated[updated.length - 1] = {
                    ...last,
                    content: last.content + event.content,
                  };
                }
                return updated;
              });
            } else if (event.type === "error") {
              setError(event.message ?? "An error occurred");
            }
          } catch {
            // Malformed JSON chunk — skip
          }
        }
      }
    } catch (err) {
      if (err instanceof Error && err.name === "AbortError") {
        // User cancelled — that is fine
      } else {
        setError(err instanceof Error ? err.message : "Connection failed");
        // Remove the empty assistant message on error
        setMessages((prev) => prev.slice(0, -1));
      }
    } finally {
      setIsStreaming(false);
      abortControllerRef.current = null;
    }
  }, [input, messages, isStreaming]);

  const cancelStream = useCallback(() => {
    abortControllerRef.current?.abort();
  }, []);

  return (
    <div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto space-y-4 mb-4">
        {messages.map((message, i) => (
          <div
            key={i}
            className={`p-3 rounded-lg ${
              message.role === "user"
                ? "bg-blue-100 ml-8"
                : "bg-gray-100 mr-8"
            }`}
          >
            <div className="text-xs text-gray-500 mb-1 font-medium">
              {message.role === "user" ? "You" : "Assistant"}
            </div>
            <div className="whitespace-pre-wrap">
              {message.content}
              {/* Blinking cursor on the last message while streaming */}
              {isStreaming && i === messages.length - 1 && (
                <span className="inline-block w-2 h-4 ml-0.5 bg-gray-700 animate-pulse" />
              )}
            </div>
          </div>
        ))}
        {error && (
          <div className="p-3 bg-red-50 border border-red-200 rounded-lg text-red-700 text-sm">
            {error}
          </div>
        )}
      </div>

      <div className="flex gap-2">
        <textarea
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => {
            if (e.key === "Enter" && !e.shiftKey) {
              e.preventDefault();
              sendMessage();
            }
          }}
          placeholder="Type a message..."
          disabled={isStreaming}
          className="flex-1 border rounded-lg p-2 resize-none"
          rows={2}
        />
        {isStreaming ? (
          <button
            onClick={cancelStream}
            className="px-4 py-2 bg-red-500 text-white rounded-lg"
          >
            Stop
          </button>
        ) : (
          <button
            onClick={sendMessage}
            disabled={!input.trim()}
            className="px-4 py-2 bg-blue-500 text-white rounded-lg disabled:opacity-50"
          >
            Send
          </button>
        )}
      </div>
    </div>
  );
}

Step 5: A Complete RAG (Retrieval-Augmented Generation) Endpoint

To illustrate how the AI stack fits together, here is a FastAPI endpoint implementing a simple RAG pipeline — the pattern behind most AI-powered search and Q&A products:

# api/routes/rag.py
from fastapi import APIRouter, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field
from typing import AsyncGenerator
from openai import AsyncOpenAI
import asyncio
import json
import numpy as np

# pgvector client
from psycopg2.extras import execute_values
import psycopg2

router = APIRouter(prefix="/api/rag", tags=["rag"])
client = AsyncOpenAI()

class RAGRequest(BaseModel):
    question: str = Field(min_length=1, max_length=2000)
    collection: str = "documents"
    top_k: int = Field(default=5, ge=1, le=20)

class SourceDocument(BaseModel):
    id: str
    title: str
    excerpt: str
    score: float

async def get_embedding(text: str) -> list[float]:
    response = await client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

def vector_search(
    embedding: list[float],
    collection: str,
    top_k: int
) -> list[SourceDocument]:
    # PostgreSQL with pgvector extension
    conn = psycopg2.connect(...)  # connection pool in production
    cursor = conn.cursor()
    cursor.execute(
        """
        SELECT id, title, content,
               1 - (embedding <=> %s::vector) as similarity
        FROM documents
        WHERE collection = %s
        ORDER BY embedding <=> %s::vector
        LIMIT %s
        """,
        (embedding, collection, embedding, top_k)
    )
    rows = cursor.fetchall()
    return [
        SourceDocument(
            id=str(row[0]),
            title=row[1],
            excerpt=row[2][:500],  # First 500 chars as excerpt
            score=float(row[3])
        )
        for row in rows
    ]

async def generate_rag_stream(
    question: str,
    sources: list[SourceDocument]
) -> AsyncGenerator[str, None]:
    context = "\n\n".join(
        f"[{s.title}]\n{s.excerpt}" for s in sources
    )

    # First, yield the source documents so the UI can render them
    # before the answer starts streaming
    sources_event = json.dumps({
        "type": "sources",
        "sources": [s.model_dump() for s in sources]
    })
    yield f"data: {sources_event}\n\n"

    system_prompt = (
        "You are a helpful assistant. Answer the user's question based "
        "only on the provided context. If the context does not contain "
        "enough information, say so clearly.\n\n"
        f"Context:\n{context}"
    )

    async with client.beta.chat.completions.stream(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question}
        ],
        max_tokens=1024,
    ) as stream:
        async for event in stream:
            if event.type == "content.delta":
                delta_event = json.dumps({
                    "type": "delta",
                    "content": event.delta
                })
                yield f"data: {delta_event}\n\n"
            elif event.type == "content.done":
                yield f"data: {json.dumps({'type': 'done'})}\n\n"

@router.post("/query")
async def rag_query(request: RAGRequest):
    # Get embedding for the question (async I/O — no GIL concern)
    embedding = await get_embedding(request.question)

    # Vector search (blocking DB call — run in executor)
    loop = asyncio.get_event_loop()
    sources = await loop.run_in_executor(
        None,
        lambda: vector_search(embedding, request.collection, request.top_k)
    )

    if not sources:
        raise HTTPException(
            status_code=404,
            detail="No relevant documents found in the collection"
        )

    return StreamingResponse(
        generate_rag_stream(request.question, sources),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "X-Accel-Buffering": "no",
        }
    )

Step 6: Deployment on Render

Render is a common choice for hosting both Next.js and FastAPI services with minimal infrastructure overhead.

Dockerfile for FastAPI:

# Dockerfile.api
FROM python:3.12-slim

WORKDIR /app

# Install system dependencies for ML libraries
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Pre-download model weights at build time (avoids cold start)
RUN python -c "from transformers import pipeline; pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english')"

COPY . .

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]

requirements.txt:

fastapi==0.115.0
uvicorn[standard]==0.30.0
pydantic==2.8.0
openai==1.50.0
transformers==4.44.0
torch==2.4.0
numpy==1.26.0
psycopg2-binary==2.9.9
python-dotenv==1.0.1

Note --workers 2 in the uvicorn command. Each worker is a separate Python process with its own GIL, allowing true parallelism across concurrent requests. For ML inference, be careful: each worker loads the model into memory, so two workers with a 2GB model need roughly 4GB of RAM. Size your Render instance accordingly.

render.yaml:

services:
  - type: web
    name: ai-api
    env: docker  # built from Dockerfile.api, not the native Python runtime
    dockerfilePath: ./Dockerfile.api
    healthCheckPath: /health
    envVars:
      - key: OPENAI_API_KEY
        sync: false  # Set in Render dashboard, not committed
      - key: DATABASE_URL
        fromDatabase:
          name: main-db
          property: connectionString

  - type: web
    name: frontend
    env: node
    buildCommand: npm ci && npm run build
    startCommand: npm run start
    envVars:
      - key: NEXT_PUBLIC_AI_API_URL
        value: https://ai-api.onrender.com
      - key: NEXT_PUBLIC_API_URL
        value: https://your-dotnet-api.azurewebsites.net

Key Differences

Concern | ASP.NET Core | FastAPI (Python)
Type system | Nominal, compiler-enforced | Structural, runtime-validated via Pydantic
Concurrency | CLR thread pool, true parallelism | asyncio event loop (single-threaded) + thread pool for CPU
OpenAPI | Swashbuckle attribute-based | Automatic from Pydantic models and route decorators
DI container | IServiceCollection, lifetimes | Depends() — functional, no container
Middleware | IMiddleware, pipeline | Starlette middleware, decorators
Validation | Data Annotations, FluentValidation | Pydantic Field(), @field_validator
Error responses | ProblemDetails (RFC 7807) | HTTPException detail, 422 Unprocessable Entity
Streaming | IAsyncEnumerable<T>, SignalR | StreamingResponse + AsyncGenerator
Background tasks | IHostedService, BackgroundService | BackgroundTasks (per-request), Celery for queues
Testing | xUnit, Moq, WebApplicationFactory | pytest, pytest-asyncio, httpx.AsyncClient

Gotchas for .NET Engineers

Gotcha 1: Python Indentation Is Structural, Not Stylistic — and Type Hints Are Optional

In C#, indentation is a style convention enforced by your linter. In Python, indentation is the block delimiter. Wrong indentation is either a SyntaxError or, worse, code that the interpreter accepts but that does not do what you intended.

# This looks like an if/else, but the else is not attached to the if
if condition:
    do_something()
  else:             # IndentationError — else is indented differently from if
    do_other()

# This runs do_other() unconditionally — not a syntax error, but wrong
if condition:
    do_something()
do_other()          # Not indented — runs regardless of condition

More relevant: Python type hints are not enforced at runtime. Pydantic enforces its own models, but arbitrary function signatures with type hints can be called with wrong types and Python will not complain:

def process_items(items: list[int]) -> int:  # Hint says list[int] -> int
    return sum(items)

result = process_items(["a", "b", "c"])  # Python accepts this
# sum() fails at runtime with TypeError, not at parse time

Fix: Use mypy or pyright as a static type checker in CI. Without a static checker, type hints in Python are documentation — valuable, but not enforced by the interpreter.

pip install mypy
mypy api/ --ignore-missing-imports --strict

Gotcha 2: Pydantic v1 vs. v2 — Two Incompatible APIs

Pydantic underwent a complete rewrite in version 2 (released 2023) that broke compatibility with v1. FastAPI 0.100+ supports Pydantic v2. Many tutorials, Stack Overflow answers, and GitHub repositories still show v1 syntax.

The most common breaking change:

# Pydantic v1
from pydantic import BaseModel, validator

class MyModel(BaseModel):
    name: str

    class Config:
        allow_population_by_field_name = True

    @validator("name")
    def name_must_be_valid(cls, v):
        return v.strip()

instance = MyModel(name="test")
data = instance.dict()  # v1 method

# Pydantic v2 — different decorator, different method names
from pydantic import BaseModel, ConfigDict, field_validator

class MyModel(BaseModel):
    name: str

    model_config = ConfigDict(populate_by_name=True)  # replaces class Config

    @field_validator("name")  # replaces @validator
    @classmethod
    def name_must_be_valid(cls, v: str) -> str:
        return v.strip()

instance = MyModel(name="test")
data = instance.model_dump()  # replaces .dict()
json_str = instance.model_dump_json()  # replaces .json()

If you install FastAPI and Pydantic from scratch, you get v2. If you install into an existing Python project with a requirements.txt that pins pydantic<2, you get v1. Check pip show pydantic to confirm the version. Do not mix v1 and v2 syntax — the error messages are often confusing.

Gotcha 3: Python datetime Is Timezone-Naive by Default

In C#, a DateTime whose Kind is Unspecified is ambiguous (local vs. UTC), and DateTimeOffset makes the offset explicit. Python has the same distinction: a datetime without tzinfo is naive (no timezone), and one with tzinfo is aware.

from datetime import datetime, timezone

# Naive — no timezone information
naive = datetime.now()          # Local time, no tzinfo
naive_utc = datetime.utcnow()   # UTC by convention, but still no tzinfo!

# Aware — explicit UTC
aware = datetime.now(timezone.utc)   # Correct way to get current UTC time

The trap: datetime.utcnow() returns the current UTC time as a naive datetime (it is deprecated as of Python 3.12 for exactly this reason). If you store this in a database and later compare it to an aware datetime, you get a TypeError. Pydantic v2 accepts both naive and aware values for a plain datetime field; if you want naive datetimes rejected, use Pydantic's AwareDatetime type or a validator like the one below.

Fix: Always use datetime.now(timezone.utc) for current timestamps. Configure Pydantic to require timezone-aware datetimes:

from pydantic import BaseModel, field_validator
from datetime import datetime, timezone

class EventModel(BaseModel):
    occurred_at: datetime

    @field_validator("occurred_at")
    @classmethod
    def must_be_timezone_aware(cls, v: datetime) -> datetime:
        if v.tzinfo is None:
            raise ValueError("occurred_at must be timezone-aware")
        return v.astimezone(timezone.utc)  # Normalize to UTC

On the TypeScript side, z.string().datetime({ offset: true }) rejects datetime strings that carry no timezone designator (it accepts both Z and explicit offsets) — add this to your Zod schemas for all date fields coming from Python APIs.
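
A minimal sketch of that schema fragment (the occurredAt field name is illustrative):

import { z } from "zod";

const eventSchema = z.object({
  // Accepts "2025-01-01T12:00:00Z" or "2025-01-01T14:00:00+02:00";
  // rejects naive strings such as "2025-01-01T12:00:00"
  occurredAt: z.string().datetime({ offset: true }).transform((v) => new Date(v)),
});

type ApiEvent = z.infer<typeof eventSchema>; // { occurredAt: Date }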

Gotcha 4: Python’s Async/Await Is Not Drop-In Parallelism

The single-threaded nature of asyncio means that code that blocks the event loop blocks all requests, not just the current one:

# This blocks the ENTIRE event loop for all concurrent requests
@app.post("/api/classify")
async def classify(request: ClassifyRequest):
    result = model(request.text)   # Synchronous, CPU-intensive — BLOCKS event loop
    return {"result": result}

# This is correct — runs blocking code off the event loop
@app.post("/api/classify")
async def classify(request: ClassifyRequest):
    loop = asyncio.get_event_loop()
    result = await loop.run_in_executor(None, lambda: model(request.text))
    return {"result": result}

The tell is the function signature: if you call a synchronous (blocking) function directly from an async def without await run_in_executor, you are blocking the event loop. Always check whether library functions you call are async (safe to await) or synchronous (must go to executor).

Alternatively, use asyncio.to_thread() (Python 3.9+) which is cleaner syntax:

import asyncio

result = await asyncio.to_thread(model, request.text)

Gotcha 5: None Is Not null in JSON — the Exclude-None Pattern

In Python, None is the absence of a value. When Pydantic serializes a model with None fields, those fields appear in the JSON output as null by default. If your TypeScript schema uses z.string().optional() (expecting the field to be absent, not null), the Zod parse will fail.

from typing import Optional
from pydantic import BaseModel

class SearchResult(BaseModel):
    id: str
    title: str
    description: Optional[str] = None  # Optional field

# Default serialization includes the None field:
# { "id": "1", "title": "Test", "description": null }

# Exclude None fields — matches TypeScript Optional behavior:
result.model_dump(exclude_none=True)
# { "id": "1", "title": "Test" }

Pick a convention and apply it consistently. Excluding None fields makes the JSON smaller and matches TypeScript’s optional semantics. Including them as null makes the schema more explicit. The critical thing is that your Zod schema and your Pydantic serialization agree:

// If Python sends null:
description: z.string().nullable().optional()

// If Python excludes the field entirely:
description: z.string().optional()

If excluding None is your convention, configure it per path operation with response_model_exclude_none (FastAPI applies this setting per route rather than through a single global switch):

app = FastAPI()

@app.get("/search/{result_id}", response_model=SearchResult, response_model_exclude_none=True)
async def get_result(result_id: str) -> SearchResult: ...

Hands-On Exercise

Goal: Build a FastAPI sentiment analysis endpoint and consume it from Next.js with streaming output and full type safety.

Prerequisites:

  • Python 3.12+, pip
  • Next.js 14 app
  • An OpenAI API key (or use Hugging Face’s free inference API)

Step 1 — Set up FastAPI:

mkdir ai-api && cd ai-api
python -m venv venv && source venv/bin/activate
pip install fastapi uvicorn openai pydantic python-dotenv

Step 2 — Write the sentiment endpoint:

Create main.py with the sentiment analysis endpoint from the “Building a FastAPI ML Endpoint” section above. Run it:

uvicorn main:app --reload --port 8000

Verify the OpenAPI spec at http://localhost:8000/openapi.json.

Step 3 — Generate TypeScript types:

cd ../your-nextjs-app
npx openapi-typescript http://localhost:8000/openapi.json \
  -o src/lib/ai-api-types.gen.ts

Open the generated file. Identify the SentimentResponse type. Compare its shape to your Pydantic model.

Step 4 — Write the Zod schema:

Write a Zod schema that mirrors your SentimentResponse Pydantic model (a possible solution sketch follows this list). Include:

  • label as a union type: z.enum(["positive", "negative", "neutral"])
  • score as z.number().min(0).max(1)
  • processing_time_ms as a positive number
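
One possible solution sketch (the schema name and file location are up to you):

// src/lib/schemas/sentiment.ts
import { z } from "zod";

export const sentimentResponseSchema = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number().min(0).max(1),
  processing_time_ms: z.number().positive(),
});

export type SentimentResponse = z.infer<typeof sentimentResponseSchema>;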

Step 5 — Add the streaming chat endpoint:

Add the streaming chat endpoint from the “Streaming LLM Responses” section to your FastAPI app. Copy the ChatInterface component into your Next.js app.

Test the streaming: you should see tokens appear progressively in the UI as they stream from the LLM API.

Step 6 — Introduce a contract violation:

Change the label field in your Pydantic SentimentResponse to sentiment_label. Run the type generator again. Observe where TypeScript now errors. Fix the Zod schema to match, observe the runtime validation fail if you send the old response shape. This is the feedback loop you rely on in production.
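
A sketch of the runtime half of that feedback loop, assuming the Step 4 schema sketch has been updated to expect sentiment_label:

// The payload still has the old shape, so validation fails loudly
const parsed = sentimentResponseSchema.safeParse({
  label: "positive",
  score: 0.97,
  processing_time_ms: 12,
});

if (!parsed.success) {
  // parsed.error reports that sentiment_label is required
  console.error(parsed.error.flatten());
}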

Step 7 — Add schemathesis:

pip install schemathesis
schemathesis run http://localhost:8000/openapi.json --checks all

Observe what schemathesis tests automatically. Note that it will send null, empty strings, very long strings, and other edge cases to your endpoints. If any cause 500 errors, that is a bug in your Pydantic validation — fix it.


Quick Reference

FastAPI ↔ ASP.NET Core mapping
  @app.get("/path")        ->  app.MapGet("/path", handler)
  @app.post("/path")       ->  app.MapPost("/path", handler)
  BaseModel                ->  record / class with Data Annotations
  Field(ge=0, le=1)        ->  [Range(0, 1)]
  Optional[T] = None       ->  T? with nullable reference types
  Depends(get_service)     ->  constructor injection
  HTTPException(404, ...)  ->  return Results.NotFound(...)
  lifespan context manager ->  IHostedService startup/shutdown

Pydantic to Zod field mapping
  str                      ->  z.string()
  int                      ->  z.number().int()
  float                    ->  z.number()
  bool                     ->  z.boolean()
  datetime                 ->  z.string().datetime({ offset: true }).transform(v => new Date(v))
  Optional[T] = None       ->  z.T().nullable().optional()
  List[T]                  ->  z.array(z.T())
  Dict[str, T]             ->  z.record(z.T())
  str Enum                 ->  z.enum(["val1", "val2"])
  Field(ge=0)              ->  .min(0)
  Field(min_length=1)      ->  .min(1) (on z.string())

Run FastAPI locally
  uvicorn main:app --reload --port 8000

OpenAPI spec URL (FastAPI)
  http://localhost:8000/openapi.json

Generate TS types from FastAPI
  npx openapi-typescript http://localhost:8000/openapi.json -o src/lib/ai-api-types.gen.ts

Streaming response pattern (FastAPI)
  return StreamingResponse(generator(), media_type="text/event-stream")
  yield f"data: {json.dumps(payload)}\n\n"

SSE client (TypeScript)
  const reader = response.body.getReader()
  // Loop: reader.read() -> decode -> split on "\n" -> parse "data: " lines

CPU-bound work off event loop (Python)
  await asyncio.to_thread(sync_function, arg1, arg2)
  # or: await loop.run_in_executor(executor, lambda: sync_function(arg1))

Pydantic v2 key method names (not v1)
  .model_dump()            (was .dict())
  .model_dump_json()       (was .json())
  @field_validator         (was @validator)
  model_config = ConfigDict(...) (was class Config)

Exclude None from response
  model_instance.model_dump(exclude_none=True)

Contract testing
  pip install schemathesis
  schemathesis run http://localhost:8000/openapi.json --checks all

Type checking Python
  pip install mypy
  mypy api/ --ignore-missing-imports

uvicorn production (multiple workers)
  uvicorn main:app --host 0.0.0.0 --port 8000 --workers 2
  # Each worker = separate process + separate GIL
  # Memory: model_size_GB * num_workers

Further Reading

4B.3 — The Polyglot Decision Framework: Choosing the Right Backend for Each Service

For .NET engineers who know: ASP.NET Core API design, service architecture, and the trade-offs of adding infrastructure complexity to a system
You’ll learn: A structured decision framework for choosing between NestJS, ASP.NET Core, and FastAPI — and the architectural patterns that make polyglot systems maintainable rather than chaotic
Time: 15-20 minutes


The most common mistake polyglot teams make is not choosing the wrong technology — it is having no framework for making the choice at all. Individual engineers solve problems in the language they know best. Teams drift toward technology accumulation: one service in NestJS because the frontend team built it, one in .NET because the backend team had a deadline, one in Python because someone wanted to try FastAPI. Six months later, you have operational complexity without any of the intended benefits.

This article gives you the decision framework. It is not a technology tutorial. No “here’s how to install NestJS” — you have Article 4.1 for that. This is the architectural reasoning layer: when you are standing in front of a new service requirement, how do you choose?


The .NET Way (What You Already Know)

In a mature .NET shop, the technology choice is usually already made. You write C#. You use ASP.NET Core. You use EF Core and SQL Server. The ecosystem is cohesive, your team has deep experience, and switching frameworks for a single service is a significant organizational event.

The architectural decisions you’ve made in .NET are good ones: vertical slice architecture, CQRS patterns, domain-driven design — these are language-agnostic. What the .NET ecosystem gives you is a stable platform where you can apply those patterns without ecosystem churn.

// In .NET, you don't think much about "which backend to use"
// You think about architecture within the backend you have:
// Domain model, service layer, repository pattern, validation, etc.

public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;
    private readonly IPaymentGateway _payment;
    private readonly ILogger<OrderService> _logger;

    // The framework choice (ASP.NET Core) is settled.
    // The architecture question is: how do we structure this domain?
    public async Task<OrderResult> PlaceOrderAsync(PlaceOrderCommand command)
    {
        // Complex domain logic — exactly where .NET excels
        var order = Order.Create(command.CustomerId, command.Items);
        await _repository.SaveAsync(order);
        var paymentResult = await _payment.ChargeAsync(order.Total, command.PaymentToken);
        // ...
    }
}

The challenge arrives when you expand beyond pure .NET: you are now building a system that needs a TypeScript frontend, a Python ML component, and potentially a NestJS BFF layer. The question “which backend?” is no longer trivially answered by “the one we know.”


The TypeScript Stack Way

When your team has TypeScript, .NET, and Python capabilities, each new service is a decision point. The framework for making that decision has two parts: a decision matrix that maps factors to technology choices, and a set of architectural patterns that define how those choices fit together.

The Decision Matrix

Factor | NestJS (TypeScript) | ASP.NET Core (.NET) | FastAPI (Python)
Domain | Web apps, BFF, CRUD APIs, event-driven services | Enterprise logic, financial, high-throughput, regulatory | ML inference, NLP, AI agents, embedding, data pipelines
Team expertise | Team is building TypeScript fluency; monorepo already exists | Team has deep .NET experience; complex domain logic already in C# | Feature requires ML libraries that only exist in Python
Type safety | tRPC end-to-end (best); no code gen required | OpenAPI codegen (good); NSwag generates C# and TS clients | OpenAPI codegen (good); FastAPI auto-generates from Pydantic
Performance | Good for I/O-bound; single-threaded event loop | Best for CPU-bound + high I/O; multi-threaded CLR | Best for ML (C extensions bypass GIL); slower for plain web
Ecosystem | npm — massive but fragmented; validate packages carefully | NuGet — curated, enterprise-grade, stable | PyPI — dominant for data science and ML; variable quality for web
Hosting | Render, Vercel, any Node-capable platform | Render, Azure, any .NET-capable platform | Render (CPU); GPU providers (Lambda Labs, Modal) for inference
Monorepo fit | Shares TypeScript types with frontend; tRPC works naturally | Separate repo or API boundary; OpenAPI is the bridge | Separate repo or API boundary; OpenAPI is the bridge
Cold start | Fast (Node.js) | Fast (AOT compiled) | Slow for ML (model loading: 10-30 seconds on first request)

The Decision Rules

The matrix gives you factors. These rules give you the decision:

Rule 1: Default to TypeScript for new services.

If you can implement the service in TypeScript without significant trade-offs, do it in TypeScript. The monorepo type-sharing benefit is real and compounding: shared Zod schemas, shared constants, shared utility functions, tRPC type inference across the full stack. Every additional language in your system multiplies your operational surface area. Earn the additional complexity.
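
As a small illustration of that compounding benefit (package and file names here are hypothetical), a schema defined once in a shared workspace package serves the API, the frontend, and every layer in between:

// packages/shared/src/schemas/user.ts: defined once, imported everywhere
import { z } from "zod";

export const userSchema = z.object({
  id: z.string().cuid(),
  email: z.string().email(),
  displayName: z.string().min(1),
});

export type User = z.infer<typeof userSchema>;

// apps/api (NestJS): validate incoming payloads with userSchema.parse(body)
// apps/web (Next.js): type component props as { user: User }
// A change to userSchema is a single edit; tsc flags every consumer that breaks.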

Rule 2: Keep .NET for what .NET already owns.

“We already have it in .NET” is a valid architectural reason to keep it in .NET. Migration for migration’s sake is waste. If you have a mature, battle-tested ASP.NET Core API with years of domain logic, carefully tuned EF Core queries, and production-proven performance characteristics, the TypeScript replacement must provide measurable benefit — not just stylistic consistency. See Article 4B.1 for the specific cases where .NET is the right choice.

Rule 3: Python only for capabilities that don’t exist elsewhere.

Python’s web frameworks are slower and less ergonomic than NestJS or ASP.NET Core for standard API work. The GIL limits true parallelism. Python services should be narrowly scoped: ML inference, NLP pipelines, LLM orchestration, embedding generation. If the feature doesn’t require PyTorch, Hugging Face, LangChain, or similar Python-dominant ML libraries, it should not be in Python. See Article 4B.2 for the full picture.

Rule 4: Minimize the number of languages in any given service boundary.

A service is one codebase deployed as one unit. That service should be one language. The polyglot architecture is about choosing the right language per service — not mixing languages inside a single service.


The Four Architectural Patterns

Every system you build with this stack will be a variant of one of these four patterns. Each is appropriate in specific circumstances.

Pattern 1: All-TypeScript (The Default)

graph TD
    B1["Browser"]
    N1["Next.js (React / Server Components)"]
    API1["NestJS API"]
    DB1["PostgreSQL"]
    B1 --> N1
    N1 -->|"tRPC (full type safety, no code gen)"| API1
    API1 -->|"Prisma"| DB1

When to use it: New greenfield projects without ML requirements and without significant existing .NET investment. Your team is building TypeScript fluency. You want maximum type safety with minimum infrastructure complexity.

The advantage: A single pnpm workspace contains both the Next.js frontend and the NestJS backend. A type change in a Prisma model flows through the NestJS service into the tRPC router and appears as a TypeScript error in the Next.js component — all in one tsc invocation. No API spec generation. No code generation pipelines. No contract tests.

When it breaks down: You need ML inference (Python required), you have a significant existing .NET codebase worth preserving, or you have CPU-bound compute requirements that exceed Node.js’s capabilities.

// All-TypeScript: tRPC router in NestJS calls Prisma, frontend calls tRPC
// packages/api/src/routers/orders.router.ts
export const ordersRouter = router({
  getById: protectedProcedure
    .input(z.object({ id: z.string().cuid() }))
    .query(async ({ input, ctx }) => {
      return ctx.prisma.order.findUniqueOrThrow({
        where: { id: input.id, userId: ctx.user.id },
        include: { items: true },
      });
    }),
});

// apps/web/src/app/orders/[id]/page.tsx
// The return type of getById is inferred automatically — no generated code
export default async function OrderPage({ params }: { params: { id: string } }) {
  const order = await api.orders.getById.query({ id: params.id });
  // order is fully typed from the Prisma query — no casting, no any
  return <OrderDetail order={order} />;
}

Pattern 2: TypeScript Frontend + .NET API

graph TD
    B2["Browser"]
    N2["Next.js (React / Server Components)"]
    API2["ASP.NET Core API"]
    DB2["SQL Server / PostgreSQL"]
    B2 --> N2
    N2 -->|"OpenAPI-generated TypeScript types\n(orval / openapi-typescript)"| API2
    API2 -->|"Entity Framework Core"| DB2

When to use it: You have an existing ASP.NET Core API with significant business logic that is not worth rewriting. The .NET backend is mature, performant, and well-tested. The frontend is new and benefits from Next.js’s SSR and developer experience.

The advantage: You preserve years of .NET investment while gaining a modern, server-rendered TypeScript frontend. Clerk handles auth on the frontend; the .NET API validates Clerk JWTs on the server. See Article 4B.1 for the full implementation.

The operational cost: Types do not flow automatically. You need an OpenAPI spec generation step in your .NET CI pipeline, and a type regeneration step in your frontend CI pipeline. Breaking changes in .NET endpoints will fail the frontend build — which is the intended behavior.

// ASP.NET Core — generate OpenAPI spec as part of CI
// Program.cs
builder.Services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo { Title = "API", Version = "v1" });
    // Include XML comments for richer spec generation
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFile));
});

// CI step: dotnet build && dotnet swagger tofile --output openapi.json
// Next.js frontend — consume the generated OpenAPI types
// orval.config.ts generates TanStack Query hooks from the spec
// apps/web/src/hooks/use-order.ts (generated by orval)
export const useGetOrderById = (id: string) =>
  useQuery({
    queryKey: ["order", id],
    queryFn: () => getOrderById(id), // typed from OpenAPI spec
  });

Pattern 3: TypeScript Frontend + Python AI Service

graph TD
    B3["Browser"]
    N3["Next.js (React / Server Components + SSE streaming)"]
    FA["FastAPI (Python)\nPyTorch / Hugging Face"]
    NS["NestJS (TypeScript)\nPrisma / PostgreSQL"]
    B3 --> N3
    N3 -->|"REST + Server-Sent Events"| FA
    N3 -->|"OpenAPI-generated TypeScript types"| NS

When to use it: A specific feature in your application requires ML inference — a recommendation engine, a summarization endpoint, an embedding service, an LLM-powered chat feature — and the rest of the application is well-served by TypeScript.

The advantage: Python does what Python is uniquely good at. NestJS handles everything else. The frontend gets consistent types from both services via OpenAPI generation.

The operational cost: Two separate deployments with separate CI pipelines. FastAPI cold start for ML model loading. Contract management between two independently deployed services.

# FastAPI — auto-generates OpenAPI spec from Pydantic models
# api/routes/summarize.py
from pydantic import BaseModel
from fastapi import APIRouter
from fastapi.responses import StreamingResponse

class SummarizeRequest(BaseModel):
    text: str
    max_length: int = 150

class SummarizeResponse(BaseModel):
    summary: str
    token_count: int

router = APIRouter()

@router.post("/summarize", response_model=SummarizeResponse)
async def summarize(request: SummarizeRequest) -> SummarizeResponse:
    # Hugging Face Transformers — only possible in Python
    result = summarizer(request.text, max_length=request.max_length)
    return SummarizeResponse(
        summary=result[0]["summary_text"],
        token_count=len(result[0]["summary_text"].split())
    )

// Next.js — consume the FastAPI OpenAPI spec via openapi-typescript
// apps/web/src/app/api/summarize/route.ts (proxy route for auth)
import type { components } from "@/types/python-api"; // generated

type SummarizeResponse = components["schemas"]["SummarizeResponse"];

export async function POST(request: Request) {
  const body = await request.json();
  const token = await getAuthToken(request);

  const res = await fetch(`${process.env.PYTHON_API_URL}/summarize`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(body),
  });

  const data: SummarizeResponse = await res.json();
  return Response.json(data);
}

Pattern 4: Full Polyglot with NestJS BFF

graph TD
    B4["Browser"]
    N4["Next.js (React / Server Components)"]
    BFF["NestJS BFF (Backend-for-Frontend)"]
    ASPNET["ASP.NET Core API\n(enterprise logic, financials, EF Core)"]
    PY["FastAPI\n(ML inference, embeddings, LLM)"]
    B4 --> N4
    N4 -->|"tRPC or typed REST"| BFF
    BFF -->|"REST"| ASPNET
    BFF -->|"REST"| PY

When to use it: You have multiple backends that serve different capabilities. The frontend complexity of calling two or three separate APIs with different auth schemes, different response shapes, and independent failure modes becomes unmanageable. The NestJS BFF consolidates this into a single, typed API surface.

The advantage: The frontend talks to one service, in one language, with one auth scheme. The NestJS BFF handles response aggregation, circuit breaking, and frontend-specific data shaping. If the Python ML service is down, the BFF can return degraded data rather than failing the entire request.

The operational cost: This is the most complex pattern — three (or more) deployments, three CI pipelines, contract management across all service boundaries, and a dedicated team to maintain the BFF. Do not adopt this pattern until you have exhausted simpler options.

// NestJS BFF — aggregates data from .NET and Python backends
// apps/bff/src/orders/orders.service.ts
@Injectable()
export class OrdersService {
  constructor(
    private readonly dotnetClient: DotnetApiClient,     // typed HTTP client to .NET API
    private readonly pythonClient: PythonApiClient,     // typed HTTP client to FastAPI
  ) {}

  async getOrderWithInsights(orderId: string, userId: string) {
    // Fan out to both backends in parallel
    const [order, insights] = await Promise.allSettled([
      this.dotnetClient.orders.getById(orderId),
      this.pythonClient.analytics.getOrderInsights(orderId),
    ]);

    return {
      // Always present — from reliable .NET API
      order: order.status === "fulfilled" ? order.value : null,
      // Gracefully degraded — Python ML service may be slow or unavailable
      insights: insights.status === "fulfilled" ? insights.value : null,
    };
  }
}

Key Differences

Decision dimension | .NET mindset | Polyglot mindset
Default technology | Always C# / ASP.NET Core | TypeScript by default; .NET or Python earned by specific need
Team cost | One language, unified expertise | Each additional language requires additional operational depth
Type sharing | Strong (C# everywhere) | Strongest with TypeScript (tRPC); contract-based with .NET/Python (OpenAPI)
Deployment units | One deployable per service | Same — but with more services and more CI pipelines
Failure isolation | Service-level (you’re already doing this) | Pattern 4 (BFF) adds circuit breaking at the aggregation layer
Migration path | Rarely migrate the runtime | Explicitly plan the boundary — what stays in .NET, what moves

Gotchas for .NET Engineers

Gotcha 1: Polyglot complexity compounds faster than you expect

Each additional language in your system multiplies your debugging surface area, your deployment complexity, and your hiring requirements. A bug that involves a type mismatch between a FastAPI response and a Next.js component requires tracing through Python Pydantic serialization, OpenAPI spec generation, TypeScript type generation, and React component props. In a mono-language system, that’s one runtime and one type system.

The rule is not “don’t go polyglot.” The rule is: the benefit must be measurable and specific. “Python is better for ML” is measurable and specific. “Python feels more modern” is not.

Cost of adding a language:
  + Additional CI pipeline
  + Different logging/monitoring setup
  + Different deployment configuration
  + Different error message formats
  + Different auth integration pattern
  + Different developer onboarding
  + Separate OpenAPI contract maintenance
  ─────────────────────────────────────
  Total: significant — earn it deliberately

Gotcha 2: The BFF pattern is often premature

Pattern 4 (NestJS BFF in front of .NET and Python) is the right answer for a specific problem: multiple backend services with different contracts, calling the same frontend. But teams frequently adopt it prematurely because it looks architecturally clean on a diagram.

Before adding a NestJS BFF layer, ask: can the frontend call the .NET API and the Python API directly, with separate TanStack Query hooks and different base URLs? If yes, that is simpler. A BFF earns its existence when the frontend requires aggregated responses that would require multiple round trips, when auth needs to be normalized across services, or when you need to shield the frontend from backend instability.

The BFF pattern adds a deployment, a CI pipeline, and a network hop. It should solve a real problem that simpler approaches cannot.
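
A sketch of the simpler alternative: two independently configured TanStack Query hooks, no BFF (the base URL variables and endpoint paths are illustrative):

// apps/web/src/hooks/use-backends.ts
import { useQuery } from "@tanstack/react-query";

const DOTNET_API = process.env.NEXT_PUBLIC_DOTNET_API_URL;
const PYTHON_API = process.env.NEXT_PUBLIC_AI_API_URL;

export function useOrder(orderId: string) {
  return useQuery({
    queryKey: ["order", orderId],
    queryFn: async () => {
      const res = await fetch(`${DOTNET_API}/orders/${orderId}`);
      if (!res.ok) throw new Error(`Orders API: HTTP ${res.status}`);
      return res.json();
    },
  });
}

export function useOrderInsights(orderId: string) {
  return useQuery({
    queryKey: ["order-insights", orderId],
    queryFn: async () => {
      const res = await fetch(`${PYTHON_API}/api/insights/${orderId}`);
      if (!res.ok) throw new Error(`ML API: HTTP ${res.status}`);
      return res.json();
    },
  });
}

If that is all the frontend needs, the extra NestJS deployment buys you nothing yet.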

Gotcha 3: “Microservices” and “polyglot” are different decisions

Microservices is an architecture pattern about service boundaries, independent deployment, and scalability. Polyglot is a decision about which language to use within a given service boundary. You can have microservices in a single language. You can have a monolith with a polyglot sidecar.

The most common confusion: teams adopt polyglot architecture and microservices simultaneously, attributing all complexity to one or the other. Separate the decisions. Can you achieve the same outcome with microservices in a single language? If yes, do that first. Polyglot adds language complexity on top of service complexity.

Gotcha 4: Python services have operational characteristics that surprise .NET engineers

ML model loading is not like application startup. A Python service that loads a 1GB Hugging Face model takes 15-30 seconds to become ready. If Render restarts that service (scale-down on idle, deployment, health check failure), users experience that cold start. .NET services and NestJS services start in under a second.

Mitigation strategies: keep ML services on “always on” Render plans with no idle scaling; use health checks that only pass after the model is loaded; pre-download model weights at build time rather than at runtime; consider model serving frameworks like Triton or vLLM that manage model lifecycle separately from the HTTP layer.

Gotcha 5: OpenAPI contract drift is silent until it breaks production

When your .NET API changes a response shape and you do not regenerate the TypeScript types in your frontend, the mismatch is invisible until runtime. The generated types say one thing; the actual JSON says another. TypeScript’s type system was satisfied at compile time — the type was generated from the spec, the spec was generated from the code, the code changed, but the spec was stale.

This is why CI pipeline ordering matters. The correct order: .NET build and test → generate OpenAPI spec → publish spec artifact → frontend downloads spec → regenerate types → type-check frontend → build frontend. If the spec does not update, the frontend type-check does not catch new incompatibilities. Article 4B.4 covers this pipeline in detail.


Hands-On Exercise

This exercise builds the decision muscle, not a deployable artifact. It is a structured analysis you should do for your current or most recent project.

The exercise: Apply the decision framework to a real service you are planning or maintaining.

Step 1: Write down the service requirement in one sentence.

Example: “An API endpoint that accepts a user’s recent purchase history and returns three product recommendations.”

Step 2: Fill in the decision matrix.

For each factor in the matrix, write down the relevant facts:

Domain: "Recommendation engine" — ML classification problem
Team expertise: "Two .NET engineers, one Python data scientist"
Type safety: "Need it — recommendation results must be typed on the frontend"
Performance: "200ms SLA — acceptable for Python with preloaded model"
Ecosystem: "Feature requires scikit-learn collaborative filtering — Python only"
Monorepo fit: "Existing Next.js + NestJS monorepo — Python would be separate"

Step 3: Apply the decision rules.

Rule 1 (Default TypeScript): Can we do this in TypeScript?
  → No. scikit-learn has no TypeScript equivalent. Rule 1 does not apply.

Rule 2 (Keep .NET): Do we have existing .NET code to preserve?
  → No existing .NET code for recommendations. Rule 2 does not apply.

Rule 3 (Python for ML only): Is this Python for ML capability?
  → Yes. FastAPI for the recommendation endpoint, narrowly scoped.

Rule 4 (One language per service): FastAPI service, Python only.
  → Agreed.

Decision: FastAPI service for the recommendation endpoint.
Architecture: Pattern 3 (TypeScript frontend + Python AI service).

Step 4: Identify the integration contract.

# The FastAPI service auto-generates OpenAPI from Pydantic models
# curl http://localhost:8000/openapi.json > specs/recommendations-api.json

# The frontend generates TypeScript types from the spec
# pnpm run generate:types:recommendations

Step 5: Identify the operational costs you are accepting.

Write them down explicitly. “We accept: Python deployment, separate CI pipeline, model cold start, OpenAPI contract maintenance.” If the list feels too long for the benefit, reconsider the decision.


Quick Reference

Decision Flowchart

flowchart TD
    Start["New service or feature requirement"]
    Q1{"Does it require\nPython ML libraries?\n(PyTorch, Hugging Face,\nscikit-learn, LangChain, etc.)"}
    FastAPI["FastAPI (Pattern 3 or 4)"]
    Q2{"Does it extend an existing,\nmature .NET codebase?\n(significant business logic,\nEF Core models, .NET-specific integrations)"}
    ASPNET["ASP.NET Core (Pattern 2 or 4)"]
    Default["Default: NestJS TypeScript (Pattern 1 or 2)"]

    Start --> Q1
    Q1 -->|Yes| FastAPI
    Q1 -->|No| Q2
    Q2 -->|Yes| ASPNET
    Q2 -->|No| Default

When Each Pattern is Right

Pattern | Use When
1: All-TypeScript | New project, no ML, no significant .NET legacy
2: TS frontend + .NET API | Existing .NET API worth preserving; .NET-specific requirements
3: TS frontend + Python AI | Specific ML/AI feature needed; rest of system is TypeScript
4: NestJS BFF + multiple backends | Multiple backends with different contracts; frontend aggregation required

The Rule of Thumb

Situation | Recommendation
Can TypeScript do it without significant trade-offs? | Do it in TypeScript
Have mature .NET code that works? | Keep it in .NET
Need ML/AI capabilities? | Use FastAPI, narrow scope
Considering BFF? | Only if you have multiple backends to aggregate
Team disagrees on language choice? | Pick TypeScript, revisit in 6 months with data

Operational Cost Checklist (per additional language)

Before adding a language, confirm you have accounted for:

  • Separate CI/CD pipeline
  • Separate deployment configuration on Render
  • Separate logging/monitoring integration (Sentry, Pino/structlog)
  • OpenAPI contract generation and publication
  • Type regeneration pipeline in frontend CI
  • Contract drift detection (breaking change alerts)
  • Additional developer onboarding documentation
  • Separate error message format handling
  • Auth token forwarding across the language boundary

If you cannot check all items, you are not ready to add the language.


Further Reading

  • [Article 4B.1 — Keeping .NET as Your API] — The complete case for preserving your ASP.NET Core investment with a TypeScript frontend
  • [Article 4B.2 — Python as the Middle Tier] — When and how to use FastAPI for AI/ML services
  • [Article 4B.4 — Cross-Language Type Contracts: OpenAPI as the Universal Bridge] — The CI/CD pipeline that keeps polyglot systems type-safe
  • Sam Newman — Backends For Frontends (BFF) — the original pattern write-up, still the definitive reference

4B.4 — Cross-Language Type Contracts: OpenAPI as the Universal Bridge

For .NET engineers who know: Swashbuckle/NSwag for generating OpenAPI specs from ASP.NET Core, and NSwag/Swagger Codegen for generating C# clients from specs
You’ll learn: How to use OpenAPI as the type-safety bridge across TypeScript, C#, and Python services — including the complete CI/CD pipeline that catches contract drift before it reaches production
Time: 15-20 minutes


The .NET Way (What You Already Know)

In a pure .NET system, type safety across the client-server boundary is a solved problem. Swashbuckle generates an OpenAPI spec from your ASP.NET Core controllers. NSwag reads that spec and generates a typed C# client. The chain works because both ends are .NET.

// ASP.NET Core — controller with explicit response types
[HttpGet("{id}")]
[ProducesResponseType(typeof(OrderDto), 200)]
[ProducesResponseType(typeof(ProblemDetails), 404)]
public async Task<IActionResult> GetOrder([FromRoute] Guid id)
{
    var order = await _orderService.GetByIdAsync(id);
    return order is null ? NotFound() : Ok(order);
}

// DTO — this is the contract that flows to generated clients
public record OrderDto(
    Guid Id,
    string CustomerName,
    decimal Total,
    OrderStatus Status,
    DateTime CreatedAt
);

# CI pipeline generates the spec and publishes it as an artifact
dotnet swagger tofile --output openapi.json bin/Release/net9.0/MyApi.dll v1

# A consuming service runs NSwag to generate a typed client
# nswag run nswag.json
# → Generates MyApiClient.cs with full type safety

The key property: a breaking change in OrderDto (renaming CustomerName to Customer, removing a field, changing a type) causes a compile error in the consuming C# client. The type system finds the break before it reaches production.

In a polyglot system — TypeScript frontend, C# backend, Python ML service — you lose this guarantee unless you deliberately recreate it. The TypeScript compiler does not know what your C# OrderDto looks like. The Python Pydantic model is invisible to TypeScript. Without explicit contract machinery, the boundaries between languages are runtime minefields.


The TypeScript Stack Way

The Core Problem: Types Don’t Cross Language Boundaries

tRPC is the gold standard for TypeScript type safety — a change to a NestJS procedure’s return type causes a TypeScript error in the Next.js component that calls it, in the same monorepo, without code generation. But tRPC is TypeScript-to-TypeScript only. The moment a non-TypeScript service enters the picture, tRPC cannot help you.

TypeScript (NestJS) ──tRPC──→ TypeScript (Next.js)  ✓ Full type inference
C# (ASP.NET Core)   ──???──→ TypeScript (Next.js)   ✗ No type flow
Python (FastAPI)    ──???──→ TypeScript (Next.js)   ✗ No type flow
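
Without contract machinery, the usual workaround is a hand-written interface that mirrors the backend DTO, and nothing keeps it in sync (a sketch of the failure mode; the env var name is illustrative):

// Hand-maintained mirror of the C# OrderDto: no tool verifies this
interface OrderDto {
  id: string;
  customerName: string;
  total: number;
  status: "pending" | "confirmed" | "shipped" | "delivered";
  createdAt: string;
}

async function getOrder(id: string): Promise<OrderDto> {
  const res = await fetch(`${process.env.DOTNET_API_URL}/orders/${id}`);
  // The cast satisfies the compiler but promises nothing about the server:
  // if the C# DTO renames customerName, this still type-checks and breaks at runtime
  return (await res.json()) as OrderDto;
}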

The solution is not to invent a new system. OpenAPI already exists. Every serious backend framework generates it. Every frontend code generation tool consumes it. The work is building the pipeline that keeps specs current and makes contract drift fail the build rather than silently break production.

How Each Backend Generates OpenAPI

Each language has native tooling that generates OpenAPI from its own type system. The output format (JSON or YAML, OpenAPI 3.x) is identical regardless of source language.

ASP.NET Core: Swashbuckle or NSwag

Swashbuckle is the default choice for ASP.NET Core OpenAPI generation. NSwag is the alternative with richer code generation features. Both read your controller attributes, ProducesResponseType declarations, and XML documentation comments.

// Program.cs — configure Swashbuckle
builder.Services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "Orders API",
        Version = "v1",
        Description = "Order management service"
    });

    // Include XML documentation for richer OpenAPI descriptions
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
    options.IncludeXmlComments(xmlPath);

    // Configure enum serialization — critical for TypeScript consumers
    options.UseAllOfToExtendReferenceSchemas();
});

// IMPORTANT: Configure JSON to serialize enums as strings
builder.Services.AddControllers().AddJsonOptions(options =>
{
    options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter());
    // Without this, enums serialize as integers in the OpenAPI spec
    // TypeScript consumers get number literals instead of named unions
});

<!-- .csproj — enable XML documentation generation -->
<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <!-- Suppress "missing XML comment" warnings on internal types -->
  <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>
# Generate spec as part of CI — add this step after dotnet build
dotnet swagger tofile \
  --output ./openapi/orders-api.json \
  bin/Release/net9.0/OrdersApi.dll \
  v1

For .NET 9+, consider the built-in Microsoft.AspNetCore.OpenApi package, maintained directly by the ASP.NET Core team, as an alternative to Swashbuckle.

FastAPI: Automatic from Pydantic Models

FastAPI generates OpenAPI automatically from Pydantic model definitions and route signatures. There is no separate generation step — the spec is available at /openapi.json while the server is running.

# api/models/order.py
from pydantic import BaseModel
from enum import Enum
from datetime import datetime
from decimal import Decimal
import uuid

class OrderStatus(str, Enum):
    # Using str Enum so FastAPI serializes as strings, not integers
    # This aligns with the .NET JsonStringEnumConverter behavior above
    PENDING = "pending"
    CONFIRMED = "confirmed"
    SHIPPED = "shipped"
    DELIVERED = "delivered"

class OrderDto(BaseModel):
    id: uuid.UUID
    customer_name: str
    total: Decimal
    status: OrderStatus
    created_at: datetime

    model_config = {
        # Use camelCase in JSON output to match TypeScript conventions
        "populate_by_name": True,
        "alias_generator": lambda s: "".join(
            w.capitalize() if i > 0 else w
            for i, w in enumerate(s.split("_"))
        )
    }

# api/routes/orders.py
from fastapi import APIRouter, HTTPException
from .models.order import OrderDto

router = APIRouter(prefix="/orders", tags=["orders"])

@router.get("/{order_id}", response_model=OrderDto)
async def get_order(order_id: uuid.UUID) -> OrderDto:
    """Retrieve an order by ID."""
    # FastAPI generates OpenAPI from the response_model, return type hint,
    # and the docstring — no additional configuration needed
    order = await order_service.get_by_id(order_id)
    if not order:
        raise HTTPException(status_code=404, detail="Order not found")
    return order
# Export the spec without a running server — useful in CI
# Using the fastapi CLI or a script that creates the app without starting uvicorn

python -c "
import json
from api.main import app
with open('openapi/ml-api.json', 'w') as f:
    json.dump(app.openapi(), f, indent=2)
"

NestJS: @nestjs/swagger from Decorators

NestJS generates OpenAPI from @nestjs/swagger decorators on controllers and DTOs. Unlike FastAPI, it does not auto-generate from type information alone — you must add the decorators.

// orders/dto/order.dto.ts
import { ApiProperty } from "@nestjs/swagger";

export enum OrderStatus {
  PENDING = "pending",
  CONFIRMED = "confirmed",
  SHIPPED = "shipped",
  DELIVERED = "delivered",
}

export class OrderDto {
  @ApiProperty({ format: "uuid" })
  id: string;

  @ApiProperty()
  customerName: string;

  @ApiProperty({ type: Number, format: "double" })
  total: number;

  @ApiProperty({ enum: OrderStatus })
  status: OrderStatus;

  @ApiProperty({ format: "date-time" })
  createdAt: Date;
}

// orders/orders.controller.ts
import { ApiTags, ApiOkResponse, ApiNotFoundResponse } from "@nestjs/swagger";

@ApiTags("orders")
@Controller("orders")
export class OrdersController {
  @Get(":id")
  @ApiOkResponse({ type: OrderDto })
  @ApiNotFoundResponse({ description: "Order not found" })
  async findOne(@Param("id", ParseUUIDPipe) id: string): Promise<OrderDto> {
    const order = await this.ordersService.findById(id);
    if (!order) throw new NotFoundException(`Order ${id} not found`);
    return order;
  }
}
// main.ts — configure Swagger generation
import { SwaggerModule, DocumentBuilder } from "@nestjs/swagger";
import { writeFileSync } from "fs";

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  const config = new DocumentBuilder()
    .setTitle("BFF API")
    .setVersion("1.0")
    .addBearerAuth()
    .build();

  const document = SwaggerModule.createDocument(app, config);

  // Write spec to disk for CI artifact upload
  writeFileSync("./openapi/bff-api.json", JSON.stringify(document, null, 2));

  SwaggerModule.setup("api/docs", app, document);
  await app.listen(3000);
}

# In CI — run the app briefly to generate the spec, then exit
# Or use a dedicated script that imports the app module without starting Kestrel
ts-node scripts/generate-openapi.ts
# → writes openapi/bff-api.json

Frontend Type Generation: Two Tools, Different Trade-offs

Once you have OpenAPI specs, you have two primary tools for generating TypeScript types from them.

openapi-typescript: Types Only

openapi-typescript generates TypeScript type definitions from an OpenAPI spec. It produces no runtime code — just types. You bring your own HTTP client.

pnpm add -D openapi-typescript
// package.json scripts
{
  "scripts": {
    "generate:types:dotnet": "openapi-typescript openapi/orders-api.json -o src/types/orders-api.d.ts",
    "generate:types:python": "openapi-typescript openapi/ml-api.json -o src/types/ml-api.d.ts",
    "generate:types": "pnpm run generate:types:dotnet && pnpm run generate:types:python"
  }
}
// Using the generated types with openapi-fetch
import createClient from "openapi-fetch";
import type { paths } from "@/types/orders-api";

const ordersClient = createClient<paths>({
  baseUrl: process.env.DOTNET_API_URL,
});

// Fully typed — the input and output shapes are inferred from the spec
const { data, error } = await ordersClient.GET("/orders/{id}", {
  params: { path: { id: "some-uuid" } },
});

// data is typed as the 200 response schema
// error is typed as the error response schemas

orval: Types + TanStack Query Hooks

orval goes further than openapi-typescript — it generates TanStack Query hooks with full types, error handling, and cache key management. This is the recommended tool when you want React Query integration without hand-writing the hooks.

pnpm add -D orval
// orval.config.ts — configure generation for each API
import { defineConfig } from "orval";

export default defineConfig({
  // .NET API — generates TanStack Query hooks
  ordersApi: {
    input: {
      target: "./openapi/orders-api.json",
    },
    output: {
      mode: "tags-split",    // one file per OpenAPI tag
      target: "./src/api/orders",
      schemas: "./src/types/orders",
      client: "react-query",
      httpClient: "fetch",
      override: {
        mutator: {
          path: "./src/lib/api-client.ts",  // custom fetch wrapper with auth
          name: "customFetch",
        },
      },
    },
  },

  // Python FastAPI — generates TanStack Query hooks
  mlApi: {
    input: {
      target: "./openapi/ml-api.json",
    },
    output: {
      mode: "tags-split",
      target: "./src/api/ml",
      schemas: "./src/types/ml",
      client: "react-query",
      httpClient: "fetch",
      override: {
        mutator: {
          path: "./src/lib/python-api-client.ts",
          name: "pythonApiFetch",
        },
      },
    },
  },
});
// src/lib/api-client.ts — the custom fetch mutator injected into generated hooks
// This is where you add auth headers, base URL, and error handling

export const customFetch = async <T>(
  url: string,
  options: RequestInit,
): Promise<T> => {
  const { getToken } = auth(); // Clerk auth helper
  const token = await getToken();

  const response = await fetch(`${process.env.NEXT_PUBLIC_API_URL}${url}`, {
    ...options,
    headers: {
      ...options.headers,
      "Content-Type": "application/json",
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
  });

  if (!response.ok) {
    // Normalize error shapes from different backends
    const errorBody = await response.json().catch(() => ({}));
    throw new ApiError(response.status, errorBody);
  }

  return response.json();
};
// Generated hook usage — fully typed, no hand-written code
import { useGetOrderById } from "@/api/orders/orders";

function OrderPage({ orderId }: { orderId: string }) {
  // Return type is inferred from the OpenAPI spec
  const { data: order, isLoading, error } = useGetOrderById(orderId);

  if (isLoading) return <Skeleton />;
  if (error) return <ErrorDisplay error={error} />;

  // order is typed as OrderDto from the OpenAPI spec
  return <div>{order.customerName}</div>;
}

The CI/CD Pipeline

The pipeline is the mechanism that makes contract safety automatic. Without it, developers forget to regenerate types, specs drift from reality, and the type system gives false confidence.

The Architecture

graph TD
    subgraph BackendCI["Backend CI (per service)"]
        B1["build"]
        B2["test"]
        B3["generate OpenAPI spec"]
        B4["upload spec artifact"]
        STORE["GitHub Releases / S3 / Artifact"]
        B1 --> B2 --> B3 --> B4 --> STORE
    end

    subgraph FrontendCI["Frontend CI"]
        F1["download specs"]
        F2["generate TS types"]
        F3["type-check"]
        F4["build"]
        FAIL["FAIL BUILD — breaking change detected"]
        F1 --> F2 --> F3 --> F4
        F3 -->|on type-check failure| FAIL
    end

    STORE -->|on spec publish| F1

Complete GitHub Actions Workflows

Backend: ASP.NET Core

# .github/workflows/dotnet-api.yml
name: .NET API CI

on:
  push:
    branches: [main]
    paths: ["services/orders-api/**"]
  pull_request:
    paths: ["services/orders-api/**"]

jobs:
  build-and-publish-spec:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: services/orders-api

    steps:
      - uses: actions/checkout@v4

      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: "9.0.x"

      - name: Restore dependencies
        run: dotnet restore

      - name: Build
        run: dotnet build --no-restore --configuration Release

      - name: Test
        run: dotnet test --no-build --configuration Release

      - name: Generate OpenAPI spec
        run: |
          dotnet tool restore
          dotnet swagger tofile \
            --output ../../openapi/orders-api.json \
            bin/Release/net9.0/OrdersApi.dll \
            v1

      - name: Detect breaking changes
        if: github.event_name == 'pull_request'
        uses: oasdiff/oasdiff-action@main
        with:
          base: "https://raw.githubusercontent.com/${{ github.repository }}/main/openapi/orders-api.json"
          revision: "openapi/orders-api.json"
          fail-on-diff: "ERR"  # Fail CI on breaking changes (non-breaking changes are warnings)

      - name: Upload OpenAPI spec artifact
        uses: actions/upload-artifact@v4
        with:
          name: orders-api-spec
          path: openapi/orders-api.json
          retention-days: 30

      # On main branch, commit the updated spec back to the repo
      # This keeps the spec in version control alongside the code
      - name: Commit updated spec
        if: github.ref == 'refs/heads/main'
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add ../../openapi/orders-api.json
          git diff --staged --quiet || git commit -m "chore: update orders-api OpenAPI spec [skip ci]"
          git push

Backend: FastAPI (Python)

# .github/workflows/python-api.yml
name: Python ML API CI

on:
  push:
    branches: [main]
    paths: ["services/ml-api/**"]
  pull_request:
    paths: ["services/ml-api/**"]

jobs:
  build-and-publish-spec:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: services/ml-api

    steps:
      - uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run tests
        run: pytest tests/ -v

      - name: Generate OpenAPI spec
        run: |
          python -c "
          import json
          from api.main import app
          spec = app.openapi()
          with open('../../openapi/ml-api.json', 'w') as f:
              json.dump(spec, f, indent=2)
          "

      - name: Detect breaking changes
        if: github.event_name == 'pull_request'
        uses: oasdiff/oasdiff-action@main
        with:
          base: "https://raw.githubusercontent.com/${{ github.repository }}/main/openapi/ml-api.json"
          revision: "openapi/ml-api.json"
          fail-on-diff: "ERR"

      - name: Upload OpenAPI spec artifact
        uses: actions/upload-artifact@v4
        with:
          name: ml-api-spec
          path: openapi/ml-api.json
          retention-days: 30

      - name: Commit updated spec
        if: github.ref == 'refs/heads/main'
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add ../../openapi/ml-api.json
          git diff --staged --quiet || git commit -m "chore: update ml-api OpenAPI spec [skip ci]"
          git push

Frontend: Type Generation and Validation

# .github/workflows/frontend.yml
name: Frontend CI

on:
  push:
    branches: [main]
    paths: ["apps/web/**", "openapi/**"]
  pull_request:
    paths: ["apps/web/**", "openapi/**"]

jobs:
  type-check-and-build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Setup pnpm
        uses: pnpm/action-setup@v3
        with:
          version: 9

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      # Regenerate TypeScript types from the OpenAPI specs in the repo
      # If specs have changed (from backend CI commits), this picks up the changes
      - name: Generate TypeScript types from OpenAPI specs
        run: pnpm run generate:types
        working-directory: apps/web

      # If types were regenerated AND they differ from what's committed,
      # that means a backend changed its spec without the frontend updating types.
      # Fail the build — the developer needs to run generate:types locally and commit.
      - name: Check generated types are up to date
        run: |
          if ! git diff --exit-code apps/web/src/types/ apps/web/src/api/; then
            echo "Generated types are out of date."
            echo "Run 'pnpm run generate:types' in apps/web and commit the changes."
            exit 1
          fi

      - name: TypeScript type check
        run: pnpm tsc --noEmit
        working-directory: apps/web
        # A breaking change in a backend OpenAPI spec will fail here:
        # the generated types changed, and existing code that depended on the
        # old types will now have TypeScript errors.

      - name: Lint
        run: pnpm lint
        working-directory: apps/web

      - name: Test
        run: pnpm test --run
        working-directory: apps/web

      - name: Build
        run: pnpm build
        working-directory: apps/web

Zod as a Runtime Safety Net

Generated TypeScript types from OpenAPI specs are compile-time constructs. They catch shape mismatches during development and CI. But at runtime, the type is erased — TypeScript cannot validate that the actual HTTP response matches the generated type.

This matters because:

  • The .NET server might serialize null where the spec says the field is required
  • Date fields might serialize as strings in one format in dev and another in production
  • A backend deploy might lag behind the spec, sending an old response shape

Zod closes this gap. You define a Zod schema that mirrors the generated type and parse every API response through it. If the runtime shape does not match, you get an explicit error — not a silent undefined that surfaces as a UI bug three screens later.

// src/lib/validated-fetch.ts
import { z, ZodSchema } from "zod";

export async function validatedFetch<T>(
  schema: ZodSchema<T>,
  url: string,
  options?: RequestInit,
): Promise<T> {
  const response = await fetch(url, options);

  if (!response.ok) {
    throw new ApiError(response.status, await response.text());
  }

  const json = await response.json();

  // Validate the response shape; safeParse returns a result object instead of throwing
  const result = schema.safeParse(json);

  if (!result.success) {
    // Log the actual response and the validation failure for debugging
    console.error("API response failed schema validation", {
      url,
      actualResponse: json,
      errors: result.error.flatten(),
    });
    // In development, throw hard. In production, you may want to degrade gracefully.
    throw new ContractViolationError(url, result.error);
  }

  return result.data;
}
// src/schemas/order.schema.ts
// Zod schema that mirrors the generated OpenAPI type
// Keep this in sync with the generated types — or generate it from the spec

import { z } from "zod";

export const OrderStatusSchema = z.enum([
  "pending",
  "confirmed",
  "shipped",
  "delivered",
]);

export const OrderSchema = z.object({
  id: z.string().uuid(),
  customerName: z.string().min(1),
  total: z.number().positive(),
  status: OrderStatusSchema,
  // Zod handles the string-to-Date transform that TypeScript types do not
  createdAt: z.string().datetime().transform((s) => new Date(s)),
});

export type Order = z.infer<typeof OrderSchema>;
// This type is equivalent to the generated OpenAPI type — use either in code
// Usage in a Server Component — Zod validates the .NET response at the boundary
async function getOrder(id: string): Promise<Order> {
  const { getToken } = auth();
  const token = await getToken();

  return validatedFetch(OrderSchema, `${process.env.DOTNET_API_URL}/orders/${id}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
}

The Zod validation at the boundary is the runtime equivalent of contract testing. It does not replace type generation — it complements it. Generated types give you compile-time safety; Zod gives you runtime safety at the exact point where you cannot trust the other system.
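
A sketch of how the two layers compose, reusing the ordersClient and OrderSchema defined earlier (ApiError is the app-defined error class from the fetch wrapper): the generated client supplies the compile-time types, and Zod checks the payload at runtime.

// Sketch: generated client for compile-time types, Zod parse for runtime validation
import { OrderSchema, type Order } from "@/schemas/order.schema";

async function fetchValidatedOrder(id: string): Promise<Order> {
  const { data, error, response } = await ordersClient.GET("/orders/{id}", {
    params: { path: { id } },
  });

  if (error) throw new ApiError(response.status, error);

  // Runtime check at the trust boundary: throws if the .NET response drifts from the spec
  return OrderSchema.parse(data);
}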


Auth Token Forwarding Across Service Boundaries

Clerk issues a JWT to the frontend user. That JWT must be forwarded to each backend service for authentication. The forwarding mechanism differs depending on where the call is made.

// src/lib/server-fetch.ts
// Server Components run on the Next.js server — the Clerk session is available
// but the token must be explicitly forwarded; it does not travel as a cookie

import { auth } from "@clerk/nextjs/server";
import type { ZodSchema } from "zod";
import { validatedFetch } from "./validated-fetch";

export async function serverFetch<T>(
  schema: ZodSchema<T>,
  url: string,
  options?: RequestInit,
): Promise<T> {
  const { getToken } = auth();
  // Request a short-lived token for the target audience (optional but more secure)
  const token = await getToken({ template: "api-token" });

  return validatedFetch(schema, url, {
    ...options,
    headers: {
      ...options?.headers,
      "Content-Type": "application/json",
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    // Disable Next.js fetch caching for auth-required requests
    cache: "no-store",
  });
}
// ASP.NET Core — validate the Clerk JWT
// Program.cs
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Clerk's JWKS endpoint — tokens are validated against Clerk's public keys
        options.Authority = $"https://{builder.Configuration["Clerk:Domain"]}";
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateAudience = false, // Clerk JWTs do not include an audience by default
            ValidateIssuer = true,
        };
    });
# FastAPI — validate the Clerk JWT using python-jose
from jose import jwt, JWTError
from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPBearer
import httpx

security = HTTPBearer()

async def get_current_user(credentials = Depends(security)):
    try:
        # Fetch Clerk's JWKS to validate the token
        async with httpx.AsyncClient() as client:
            jwks = await client.get(f"https://{CLERK_DOMAIN}/.well-known/jwks.json")

        payload = jwt.decode(
            credentials.credentials,
            jwks.json(),
            algorithms=["RS256"],
            options={"verify_aud": False}
        )
        return payload
    except JWTError:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid authentication credentials"
        )

@router.get("/orders/{order_id}")
async def get_order(
    order_id: uuid.UUID,
    user = Depends(get_current_user)  # JWT validation on every protected route
):
    # ...

Versioning Strategy

When you have multiple consumers of an OpenAPI spec (different frontend versions, different services), breaking changes must be coordinated. Three strategies, in order of complexity:

URL versioning — the simplest and most explicit:

/api/v1/orders/{id}   ← current stable version
/api/v2/orders/{id}   ← new version with breaking changes

Both versions are active simultaneously. The frontend migrates to v2 on its own schedule. The v1 spec and v2 spec are separate files: orders-api-v1.json and orders-api-v2.json.
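
In the NestJS BFF, URI versioning can be enabled directly; a sketch, assuming the existing OrdersController is split into versioned controllers:

// main.ts — enable URI versioning (sketch)
import { VersioningType } from "@nestjs/common";

app.enableVersioning({ type: VersioningType.URI }); // routes become /v1/..., /v2/...

// orders.controller.ts — keep v1 live while v2 introduces the breaking change
@Controller({ path: "orders", version: "1" })
export class OrdersV1Controller { /* current stable response shape */ }

@Controller({ path: "orders", version: "2" })
export class OrdersV2Controller { /* new response shape */ }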

Spec file versioning — commit the spec to version control with semantic versioning:

# In the OpenAPI spec root
openapi/
  orders-api.json          ← latest
  orders-api-1.2.0.json    ← pinned versions for consumers that cannot update
  ml-api.json

Frontend CI specifies which version to use:

// package.json — pin to a specific spec version for reproducible builds
{
  "scripts": {
    "generate:types:dotnet": "openapi-typescript openapi/orders-api-1.2.0.json -o src/types/orders-api.d.ts"
  }
}

Breaking change detection in CI — using oasdiff (already shown in the GitHub Actions workflows above), the CI pipeline fails when a PR introduces a breaking change to a published spec. Breaking changes are defined by the OpenAPI specification: removing a required field, changing a field’s type, removing an endpoint, changing a required query parameter.

# Local breaking change check before pushing, using the oasdiff CLI
oasdiff breaking \
  openapi/orders-api.json \
  openapi/orders-api-new.json
# Outputs: list of breaking changes with severity

Key Differences

| Aspect | .NET-only system | Polyglot with OpenAPI bridge |
| --- | --- | --- |
| Type sharing | Compile-time (shared C# assemblies) | Contract-based (OpenAPI spec → generated types) |
| Breaking change detection | Compile error immediately | CI pipeline failure (one build cycle lag) |
| Runtime validation | .NET type system validates deserialization | Zod required for runtime safety |
| Auth propagation | Windows Auth, ASP.NET Identity cookies — automatic | JWT must be explicitly forwarded per request |
| Enum serialization | Integer by default (configure JsonStringEnumConverter) | String required for TypeScript union type generation |
| Date handling | DateTime serializes as ISO 8601 | TypeScript Date requires explicit Zod transform |
| Null vs. undefined | C# null → JSON null | TypeScript distinguishes null from undefined; Zod handles the mapping |
| Code generation cadence | One-time setup with NSwag | Per-backend-change; CI automation required |

Gotchas for .NET Engineers

Gotcha 1: Enum serialization must be configured explicitly — on every backend

TypeScript’s code generation tools produce string union types from OpenAPI enum definitions: "pending" | "confirmed" | "shipped" | "delivered". This works correctly only when the backend serializes enums as strings.

ASP.NET Core serializes enums as integers by default. Without JsonStringEnumConverter, your OrderStatus.Confirmed becomes 1 in the JSON response. The generated TypeScript type says "confirmed" — you get a silent mismatch that produces undefined status values in your UI.

// Program.cs — REQUIRED for correct TypeScript enum generation
builder.Services.AddControllers().AddJsonOptions(options =>
{
    options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter());
});

// Also annotate the enum to ensure Swashbuckle generates string values in the spec
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum OrderStatus { Pending, Confirmed, Shipped, Delivered }

FastAPI with str Enum (as shown above) handles this correctly by default. NestJS with TypeScript enums generates strings naturally. The problem is specifically .NET with default configuration.
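
If you cannot fix the backend configuration immediately, the Zod layer from earlier catches the mismatch at the boundary instead of letting it surface as undefined in the UI; a minimal sketch using the OrderStatusSchema defined above:

// OrderStatusSchema only accepts the four string literals from the spec
const result = OrderStatusSchema.safeParse(1); // what a default-configured .NET backend sends
if (!result.success) {
  // Zod reports an invalid enum value here, so the bad payload never reaches the UI
  console.error(result.error.flatten());
}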

Gotcha 2: Date fields serialize as strings — Zod must perform the transform

ASP.NET Core serializes DateTime as an ISO 8601 string: "2026-02-18T14:30:00Z". FastAPI with Pydantic serializes datetime similarly. The OpenAPI spec declares these as type: string, format: date-time.

The generated TypeScript type from openapi-typescript for a date-time field is string, not Date. If you want a Date object, you must transform it explicitly.

// Without Zod — generated type is `string`
const order = await getOrder(id);
order.createdAt;  // string — "2026-02-18T14:30:00Z"
order.createdAt.toLocaleDateString(); // TypeError: not a function

// With Zod transform — you get a Date object
const OrderSchema = z.object({
  createdAt: z.string().datetime().transform((s) => new Date(s)),
});
const order = OrderSchema.parse(rawResponse);
order.createdAt.toLocaleDateString(); // Works

The alternative: keep dates as ISO 8601 strings throughout your application and only convert to Date at the display layer using date-fns or Intl.DateTimeFormat. This approach has fewer surprises — the string representation is what the API actually sends, and you never have a gap between what TypeScript thinks the type is and what exists at runtime.
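
A sketch of that strings-everywhere approach, converting only at the display layer:

// order.createdAt stays `string` everywhere; convert only where it is rendered
const formatDate = new Intl.DateTimeFormat("en-US", { dateStyle: "medium" });

function OrderDate({ iso }: { iso: string }) {
  return <time dateTime={iso}>{formatDate.format(new Date(iso))}</time>;
}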

Gotcha 3: The spec in your repo may be stale

When you commit the OpenAPI spec to version control (as shown in the CI workflows), there is a window between when the backend changes and when the CI commits the updated spec. A developer who pulls main after a backend merge but before the CI spec-update commit will have a stale spec locally.

The mitigation is the check generated types are up to date step in the frontend CI workflow. But locally, developers need discipline: pnpm run generate:types before running type-check or starting the dev server after pulling changes.

# Add this to your onboarding docs and your local git hook
# .husky/post-merge (runs after git pull / git merge)
#!/usr/bin/env sh
pnpm run generate:types --if-present

A stronger mitigation: instead of committing specs to the repo, publish them as GitHub Release assets or to an artifact store. The frontend CI downloads the latest spec from the artifact store at build time, ensuring it always regenerates from the current spec. This eliminates the “stale spec in repo” problem but adds a network dependency in CI.

Gotcha 4: orval generates a lot of files — treat them as build artifacts

Running pnpm run generate:types regenerates dozens of files in src/api/ and src/types/. These files are entirely derived from the OpenAPI spec — they have no handwritten content. Two schools of thought on whether to commit them:

Commit generated files: Simpler local development — no generation step before running. Diffs in PRs show exactly what changed. But: generated file diffs pollute PR reviews, and merge conflicts in generated files are painful.

Gitignore generated files: Cleaner PRs, no merge conflicts. But: every developer must run pnpm run generate:types before starting work, and the CI must regenerate before type-checking.

The recommended approach: commit the generated files, and mark them as generated in .gitattributes. GitHub collapses files flagged with linguist-generated=true in PR diffs by default and excludes them from language statistics, which keeps reviews focused on handwritten code.

# .gitattributes — hint to GitHub that these are generated
src/api/**/*.ts linguist-generated=true
src/types/**/*.ts linguist-generated=true

Gotcha 5: null from C# is not the same as undefined in TypeScript

When ASP.NET Core returns a nullable field as null in JSON, the generated TypeScript type marks the field as T | null. In TypeScript, null and undefined are distinct. A property that is null is present with a null value; a property that is undefined is absent from the object.

This creates friction with optional chaining and nullish coalescing:

// Generated type from a nullable C# string field
interface Order {
  notes: string | null;  // Can be null, but NOT undefined — it's always present
}

const order = getOrder();

// This works correctly:
const hasNotes = order.notes !== null;

// A common habit carried over from .NET is loose equality:
const hasNotes2 = order.notes != null;  // works: != null excludes both null and undefined
// In TypeScript, == null / != null match both null AND undefined
// Fine here because notes is always present, but !== null states the intent more clearly

// Zod can normalize the shape if your app prefers undefined:
const OrderSchema = z.object({
  notes: z.string().nullable().transform((v) => v ?? undefined),
  // Now notes is string | undefined — more idiomatic TypeScript
});

Hands-On Exercise

This exercise sets up the complete type contract pipeline for a polyglot system with one .NET backend and one TypeScript frontend.

What you’ll build: The OpenAPI generation step for an ASP.NET Core project, and the type generation step for a Next.js frontend that consumes it.

Prerequisites: An existing ASP.NET Core Web API project and a Next.js project (or use a test scaffold).

Step 1: Add Swashbuckle to your .NET project

dotnet add package Swashbuckle.AspNetCore
dotnet add package Swashbuckle.AspNetCore.Cli
dotnet tool install --global Swashbuckle.AspNetCore.Cli
// Program.cs — add these lines
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddControllers().AddJsonOptions(options =>
{
    options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter());
});

var app = builder.Build();
app.UseSwagger();

Step 2: Generate the spec from your .NET project

dotnet build --configuration Release
swagger tofile --output openapi.json bin/Release/net9.0/YourApi.dll v1
cat openapi.json | head -30
# Verify the spec looks correct — check enum values, field names, response shapes

Step 3: Install type generation tools in your Next.js project

cd apps/web
pnpm add -D openapi-typescript orval

Step 4: Configure orval

Create apps/web/orval.config.ts with the configuration shown earlier in this article, pointing input.target at your openapi.json file.

Step 5: Generate types and verify

pnpm orval
# Inspect the generated files:
ls src/api/
ls src/types/
# Open one generated hook — verify the return types match your DTO shape

Step 6: Use a generated hook in a component

Replace any existing hand-written fetch call with a generated hook. Run the TypeScript compiler and verify no errors:

pnpm tsc --noEmit

Step 7: Break the contract deliberately

In your .NET project, rename a field in a DTO (for example, CustomerName to Customer). Rebuild and regenerate the spec:

dotnet build && swagger tofile --output openapi.json bin/Release/net9.0/YourApi.dll v1

In your Next.js project, regenerate the types:

pnpm orval
pnpm tsc --noEmit
# You should see TypeScript errors where you reference order.customerName
# The type system found the breaking change

This is the experience you want your CI pipeline to automate: any backend contract break surfaces as a build failure in the frontend.


Quick Reference

Tool Selection

| Need | Tool | Notes |
| --- | --- | --- |
| Types only from OpenAPI spec | openapi-typescript | Lightest; bring your own HTTP client |
| Types + TanStack Query hooks | orval | Recommended; generates everything |
| Typed fetch client (lighter than orval) | openapi-fetch | Pairs with openapi-typescript |
| Breaking change detection | oasdiff | Use in CI on PR branches |
| Runtime response validation | zod | Complements generated types; not a replacement |

Backend OpenAPI Generation Commands

| Backend | Generate command | Output |
| --- | --- | --- |
| ASP.NET Core (Swashbuckle) | dotnet swagger tofile --output spec.json YourApi.dll v1 | spec.json |
| FastAPI | python -c "import json; from api.main import app; json.dump(app.openapi(), open('spec.json','w'))" | spec.json |
| NestJS (@nestjs/swagger) | ts-node scripts/generate-openapi.ts | spec.json (from writeFileSync in script) |

Common Configuration Mistakes

| Mistake | Symptom | Fix |
| --- | --- | --- |
| Missing JsonStringEnumConverter in .NET | TypeScript enum values are numbers, not strings | Add options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter()) |
| Not gitignoring .env in backend | Clerk secrets committed | .env in .gitignore; use GitHub Secrets in CI |
| Date field typed as string in TypeScript | toLocaleDateString() throws TypeError | Add Zod .datetime().transform(s => new Date(s)) |
| Generated types committed but not regenerated after spec change | TypeScript sees old types; new API behavior invisible | Run pnpm run generate:types after pulling changes |
| Nullable C# fields vs. optional TypeScript fields | null vs. undefined confusion in optional chaining | Normalize in Zod schema or choose a consistent convention |
| Auth token not forwarded from Server Component | 401 from .NET/Python API | Use serverFetch wrapper that calls auth().getToken() |

The OpenAPI CI Pipeline (Summary)

flowchart TD
    A["Backend change pushed"]
    B["Backend CI:\nbuild → test → generate spec → commit spec to repo"]
    C["Frontend CI (triggered by spec change):\ndownload spec → pnpm orval → pnpm tsc → pnpm build"]
    D{"Breaking change?"}
    E["TypeScript error in tsc step → CI fails → developer fixes"]
    F["Build succeeds → deploy"]

    A --> B --> C --> D
    D -->|Yes| E
    D -->|No| F



Cross-reference: This article covers the type contract pipeline. For the architectural decision of when to use each backend, see Article 4B.3 (the polyglot decision framework). For the full .NET-as-API pattern with OpenAPI client generation, see Article 4B.1. For Python AI services with streaming responses, see Article 4B.2.

PostgreSQL for SQL Server Engineers

For .NET engineers who know: SQL Server, T-SQL, SSMS, Windows Authentication, and datetime2/uniqueidentifier types
You’ll learn: Where PostgreSQL and SQL Server are functionally identical, where they differ in syntax and convention, and the specific gotchas that will slow you down on your first project
Time: 15-20 min read


The .NET Way (What You Already Know)

You know SQL Server deeply. You write T-SQL fluently, you navigate SSMS without thinking, and you rely on Windows Authentication for local development because it just works. You know that IDENTITY(1,1) generates surrogate keys, that uniqueidentifier stores GUIDs, that nvarchar(MAX) holds Unicode text, and that datetime2(7) gives you 100-nanosecond precision (with UTC stored by convention, since the type carries no offset).

You also know the SQL Server ecosystem: execution plans in SSMS, SET STATISTICS IO ON, sp_WhoIsActive, columnstore indexes, and how to configure a linked server. That tooling experience is real and valuable. None of it transfers directly to PostgreSQL — but the underlying concepts do.

PostgreSQL is not a simplified version of SQL Server. It is a different full-featured RDBMS with a different lineage (university research vs. Microsoft acquisition), different defaults, and in several areas — particularly JSON, extensibility, and standards compliance — it is ahead of SQL Server. The mental model shift is smaller than moving from a relational database to MongoDB, but larger than moving between SQL Server versions.


The PostgreSQL Way

Connection Strings and Authentication

SQL Server on Windows defaults to Windows Authentication (Integrated Security=True). You log in as your Windows user, no password in the connection string. PostgreSQL has no concept of Windows Authentication. Every connection uses a database username and password (or SSL certificates, LDAP, etc.). For local development you will almost always use postgres as the superuser with a password you set during installation.

SQL Server connection string:

Server=localhost;Database=MyApp;Integrated Security=True;TrustServerCertificate=True;

PostgreSQL connection string (libpq format, used by most drivers):

postgresql://myuser:mypassword@localhost:5432/myapp

PostgreSQL connection string (.NET Npgsql):

Host=localhost;Port=5432;Database=myapp;Username=myuser;Password=mypassword;

Render (cloud PostgreSQL) connection string:

postgresql://myuser:mypassword@dpg-xxxxx.oregon-postgres.render.com:5432/myapp_xxxx

Render provides internal and external connection strings. Use the internal URL from services running on Render (same datacenter, no TLS overhead). Use the external URL from your local machine or CI, and append ?sslmode=require to it when connecting from outside Render.

# External (local dev or CI)
postgresql://myuser:mypassword@dpg-xxxxx.oregon-postgres.render.com/myapp_xxxx?sslmode=require

# Internal (from another Render service)
postgresql://myuser:mypassword@dpg-xxxxx/myapp_xxxx

psql vs SSMS

SSMS is a rich GUI. psql is a command-line client. Most developers also use pgAdmin (GUI, free), TablePlus, or the Database panel in VS Code with the PostgreSQL extension.

| SSMS Action | psql Equivalent |
| --- | --- |
| Connect to server | psql postgresql://user:pass@host/db |
| List databases | \l |
| Switch database | \c dbname |
| List tables | \dt or \dt schema.* |
| Describe table | \d tablename |
| List schemas | \dn |
| Show execution plan | EXPLAIN ANALYZE SELECT ... |
| Run a script file | \i /path/to/file.sql |
| Toggle expanded output | \x |
| Quit | \q |

psql also has tab completion, command history, and \e to open your query in $EDITOR. For production queries and debugging you will spend time in psql. Get comfortable with \d — it is your metadata workhorse.

Schemas vs. Databases

This is one of the most disorienting conceptual differences.

In SQL Server, schemas are namespaces inside a database (dbo.Orders, sales.Orders). Schemas are cheap to create and commonly used to organize tables by domain. A SQL Server instance can have multiple databases, each with its own set of schemas.

In PostgreSQL, the word “schema” means the same thing — a namespace inside a database. But the architecture difference is in how you use them:

  • In SQL Server, cross-database queries are common: SELECT * FROM OtherDb.dbo.Orders.
  • In PostgreSQL, cross-database queries within a single server are not natively supported. Each database is fully isolated. If you need cross-database access you use dblink or postgres_fdw (foreign data wrappers), which is heavier than SQL Server’s linked servers.

The practical consequence: PostgreSQL shops tend to put everything in one database and use schemas for organization. SQL Server shops often have many databases on one instance.

-- PostgreSQL: create a schema and use it
CREATE SCHEMA sales;
CREATE TABLE sales.orders (id SERIAL PRIMARY KEY, total NUMERIC(12, 2));
SELECT * FROM sales.orders;

-- The default schema is "public"
CREATE TABLE products (id SERIAL PRIMARY KEY, name TEXT);
-- This is equivalent to:
CREATE TABLE public.products (id SERIAL PRIMARY KEY, name TEXT);

The search_path setting controls which schemas PostgreSQL checks when you use an unqualified table name, analogous to SQL Server’s default schema on a user.

SET search_path TO sales, public;
-- Now "orders" resolves to sales.orders, "products" to public.products

Data Type Mapping

| SQL Server Type | PostgreSQL Type | Notes |
| --- | --- | --- |
| NVARCHAR(n) | VARCHAR(n) | PostgreSQL is always Unicode (UTF-8). No N prefix needed. |
| NVARCHAR(MAX) | TEXT | Unlimited length. TEXT and VARCHAR have identical performance in PG. |
| VARCHAR(n) | VARCHAR(n) | Same, but SQL Server VARCHAR is not Unicode by default. |
| INT | INTEGER or INT4 | Identical 4-byte integer. |
| BIGINT | BIGINT or INT8 | Identical 8-byte integer. |
| SMALLINT | SMALLINT or INT2 | Identical. |
| BIT | BOOLEAN | PG uses TRUE/FALSE. SQL Server uses 1/0. |
| DECIMAL(p,s) | NUMERIC(p,s) | Functionally identical. PG also has DECIMAL as an alias. |
| FLOAT | DOUBLE PRECISION | 8-byte IEEE 754. |
| REAL | REAL | 4-byte IEEE 754. |
| UNIQUEIDENTIFIER | UUID | PG stores UUID as 16 bytes, not a string. Use gen_random_uuid(). |
| DATETIME2 | TIMESTAMPTZ | Always store timestamps with time zone in PG. See gotchas. |
| DATETIME | TIMESTAMP | Without time zone — avoid for most use cases. |
| DATE | DATE | Identical. |
| TIME | TIME | Available, but TIMETZ (time with time zone) is rarely useful. |
| BINARY/VARBINARY | BYTEA | Binary data. |
| XML | XML | PG has XML support, but JSON/JSONB is more idiomatic. |
| NTEXT (deprecated) | TEXT | |
| MONEY | NUMERIC(19,4) | PG has a MONEY type but it is locale-dependent; avoid it. |
| ROWVERSION / TIMESTAMP | No direct equivalent | Use xmin system column or an explicit updated_at column. |
| HIERARCHYID | No equivalent | Model with ltree extension or adjacency list. |
| GEOGRAPHY/GEOMETRY | GEOMETRY via PostGIS | PostGIS is a mature extension, widely used. |

Auto-increment / Identity:

-- SQL Server
CREATE TABLE orders (
    id INT IDENTITY(1,1) PRIMARY KEY,
    total DECIMAL(12,2) NOT NULL
);

-- PostgreSQL — old style (SERIAL is a shorthand, not a true type)
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    total NUMERIC(12, 2) NOT NULL
);

-- PostgreSQL — modern style (SQL standard, preferred from PG 10+)
CREATE TABLE orders (
    id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    total NUMERIC(12, 2) NOT NULL
);

-- UUID primary key (common in distributed systems)
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    total NUMERIC(12, 2) NOT NULL
);

SERIAL creates a sequence and sets the default. GENERATED ALWAYS AS IDENTITY is the SQL standard and prevents accidental manual inserts to the identity column (unless you use OVERRIDING SYSTEM VALUE). Both work. New projects should prefer GENERATED ALWAYS AS IDENTITY.

JSON Support: PostgreSQL Is Ahead

SQL Server added JSON support in 2016 and it remains a string-parsing feature: JSON lives in NVARCHAR columns, there is no dedicated JSON index type (you index computed columns over JSON_VALUE() instead), and every query parses the string through functions like JSON_VALUE().

PostgreSQL has two JSON types:

  • JSON — stores the JSON string verbatim, validates it, parses on every access
  • JSONB — stores the parsed binary representation. Supports indexing. Faster for reads. This is what you almost always want.
-- SQL Server JSON (stored as NVARCHAR, not a real type)
CREATE TABLE products (
    id INT IDENTITY(1,1) PRIMARY KEY,
    attributes NVARCHAR(MAX) CHECK (ISJSON(attributes) = 1)
);
-- Query — parses the string every time
SELECT JSON_VALUE(attributes, '$.color') FROM products WHERE id = 1;

-- PostgreSQL JSONB
CREATE TABLE products (
    id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    attributes JSONB NOT NULL DEFAULT '{}'
);
-- Query — reads from binary representation
SELECT attributes->>'color' FROM products WHERE id = 1;

-- Index a specific field inside JSONB
CREATE INDEX idx_products_color ON products ((attributes->>'color'));

-- GIN index for arbitrary key lookups
CREATE INDEX idx_products_attrs ON products USING GIN (attributes);

-- Query using the GIN index (does this JSONB contain this key-value?)
SELECT * FROM products WHERE attributes @> '{"color": "red"}';

-- Update a single field without replacing the whole document
UPDATE products
SET attributes = attributes || '{"weight": 1.5}'::jsonb
WHERE id = 1;

-- Remove a key
UPDATE products
SET attributes = attributes - 'weight'
WHERE id = 1;

The -> operator extracts a JSON value as JSONB. The ->> operator extracts it as text. The @> operator checks containment (and uses GIN indexes). This is significantly more capable than SQL Server’s JSON functions.

Full-Text Search

SQL Server full-text search requires a Full-Text Catalog, CONTAINS(), and separate installation of the Full-Text Search feature. PostgreSQL full-text search is built in and uses tsvector/tsquery types.

-- PostgreSQL full-text search

-- Stored tsvector for performance
ALTER TABLE articles ADD COLUMN search_vector TSVECTOR;
UPDATE articles SET search_vector = to_tsvector('english', title || ' ' || body);
CREATE INDEX idx_articles_fts ON articles USING GIN (search_vector);

-- Keep it updated with a trigger
CREATE FUNCTION articles_search_trigger() RETURNS trigger AS $$
BEGIN
  NEW.search_vector := to_tsvector('english', NEW.title || ' ' || NEW.body);
  RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER articles_search_update
  BEFORE INSERT OR UPDATE ON articles
  FOR EACH ROW EXECUTE FUNCTION articles_search_trigger();

-- Query
SELECT id, title,
       ts_rank(search_vector, query) AS rank
FROM articles,
     to_tsquery('english', 'database & performance') query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;

For more advanced needs (fuzzy matching, typo tolerance), use the pg_trgm extension which adds trigram-based similarity search and supports ILIKE indexes.

CTEs and Window Functions

Good news: CTEs and window functions are nearly identical between SQL Server and PostgreSQL. The syntax maps almost one-to-one.

-- SQL Server CTE
WITH ranked AS (
    SELECT
        customer_id,
        total,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
    FROM orders
)
SELECT * FROM ranked WHERE rn = 1;

-- PostgreSQL CTE — identical syntax
WITH ranked AS (
    SELECT
        customer_id,
        total,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
    FROM orders
)
SELECT * FROM ranked WHERE rn = 1;

One difference: in PostgreSQL 11 and earlier, CTEs were “optimization fences” — the query planner could not push predicates into them. Since PostgreSQL 12, non-recursive CTEs that are referenced only once are inlined by default (same as SQL Server). You can force the old behavior with WITH cte AS MATERIALIZED (...).

PostgreSQL recursive CTEs use identical syntax (WITH RECURSIVE).

Stored Procedures vs. Functions

SQL Server has both stored procedures and user-defined functions with fairly clear distinctions. PostgreSQL blurs this line: historically PostgreSQL had only functions, and CREATE PROCEDURE was added in PostgreSQL 11 (with different semantics than SQL Server procedures).

| Feature | SQL Server SP | PostgreSQL Function | PostgreSQL Procedure |
| --- | --- | --- | --- |
| Returns a result set | Yes | Yes (via RETURNS TABLE or SETOF) | No |
| Returns scalar value | No | Yes | No |
| Can manage transactions | Yes | No | Yes (PG 11+) |
| Called with EXEC | Yes | No | CALL proc_name() |
| Called with SELECT | No | Yes | No |
| Languages | T-SQL | PL/pgSQL, SQL, PL/Python, PL/v8, etc. | PL/pgSQL, SQL |

In practice, PostgreSQL shops use functions for almost everything, reserving procedures for cases where you need explicit COMMIT/ROLLBACK inside the routine.

-- SQL Server stored procedure
CREATE PROCEDURE GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT * FROM orders WHERE customer_id = @CustomerId;
END;

EXEC GetOrdersByCustomer @CustomerId = 42;

-- PostgreSQL equivalent — function returning a table
CREATE OR REPLACE FUNCTION get_orders_by_customer(p_customer_id INTEGER)
RETURNS TABLE (id INTEGER, total NUMERIC, created_at TIMESTAMPTZ)
LANGUAGE SQL
AS $$
    SELECT id, total, created_at
    FROM orders
    WHERE customer_id = p_customer_id;
$$;

SELECT * FROM get_orders_by_customer(42);

Performance Tuning: EXPLAIN ANALYZE

EXPLAIN ANALYZE is the PostgreSQL equivalent of SQL Server’s “Include Actual Execution Plan.” It shows the query plan with estimated and actual row counts, execution time, and which indexes were used.

-- SQL Server
-- Turn on in SSMS: Query > Include Actual Execution Plan (or Ctrl+M)
SELECT * FROM orders WHERE customer_id = 42;

-- PostgreSQL
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

Sample output:

Index Scan using idx_orders_customer on orders  (cost=0.43..8.45 rows=3 width=72) (actual time=0.021..0.024 rows=3 loops=1)
  Index Cond: (customer_id = 42)
Planning Time: 0.123 ms
Execution Time: 0.041 ms

Use EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) for full detail including cache hit ratios — the JSON format renders nicely in tools like explain.dalibo.com (paste the JSON for a visual tree).

Key plan node types you will encounter:

  • Seq Scan — full table scan (like a Table Scan in SQL Server). Usually bad on large tables.
  • Index Scan — uses a B-tree index to fetch rows. Good.
  • Index Only Scan — all needed columns are in the index (covering index). Best.
  • Bitmap Heap Scan — combines multiple index results then fetches heap pages. Efficient for low-selectivity queries.
  • Hash Join — joins using a hash table. Common for large tables without usable join indexes.
  • Nested Loop — like SQL Server’s Nested Loops. Good when outer table is small.

SQL Server has sp_WhoIsActive for monitoring active queries. PostgreSQL has pg_stat_activity:

-- Show active queries longer than 5 seconds
SELECT pid, now() - query_start AS duration, query, state
FROM pg_stat_activity
WHERE state != 'idle'
  AND query_start < now() - interval '5 seconds'
ORDER BY duration DESC;

-- Kill a query
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid = 12345;

Key Differences

| Concept | SQL Server | PostgreSQL |
| --- | --- | --- |
| Authentication | Windows Auth or SQL Auth | Password, certificate, LDAP (no Windows Auth) |
| Default client | SSMS | psql, pgAdmin, TablePlus |
| Schemas | Namespaces within a database | Same — but cross-database queries are not native |
| String type | NVARCHAR, VARCHAR (separate) | TEXT or VARCHAR (always UTF-8) |
| Boolean | BIT (0/1) | BOOLEAN (TRUE/FALSE) |
| GUID | UNIQUEIDENTIFIER | UUID |
| Timestamps | DATETIME2 | TIMESTAMPTZ (always use with timezone) |
| Auto-increment | IDENTITY(1,1) | SERIAL or GENERATED ALWAYS AS IDENTITY |
| JSON | String-based functions | JSONB binary type with GIN indexes |
| Case sensitivity | Case-insensitive by default | Case-sensitive by default |
| String quoting | Single or double quotes | Single quotes only; double quotes for identifiers |
| Execution plan | SSMS GUI | EXPLAIN ANALYZE |
| Activity monitor | sp_WhoIsActive | pg_stat_activity |
| Top N rows | SELECT TOP 10 | SELECT ... LIMIT 10 |
| Pagination | OFFSET ... FETCH NEXT ... ROWS ONLY | LIMIT ... OFFSET ... |
| String concatenation | + | \|\| |
| Null-safe equality | IS NULL / IS NOT NULL | Same, plus IS DISTINCT FROM / IS NOT DISTINCT FROM |

Gotchas for .NET Engineers

1. Case sensitivity in identifiers and string comparisons

SQL Server is case-insensitive by default for both identifiers and data. PostgreSQL is case-sensitive for string data, and its identifier handling has a trap: unquoted identifiers are silently folded to lowercase, while quoted identifiers preserve case exactly and must be quoted in every subsequent query.

-- This creates a column named "customerid" (unquoted identifiers are lowercased)
CREATE TABLE orders (CustomerId INTEGER);
SELECT CustomerId FROM orders;   -- works (lowercased to customerid, matches)
SELECT CUSTOMERID FROM orders;   -- also works (also lowercased)

-- This creates a column named exactly "CustomerId" (quoted)
CREATE TABLE orders_quoted ("CustomerId" INTEGER);

-- Now you must always quote it
SELECT "CustomerId" FROM orders_quoted; -- works
SELECT CustomerId FROM orders_quoted;   -- ERROR: column "customerid" does not exist

The convention in PostgreSQL is to use snake_case for identifiers and never quote them. If you create tables with quoted mixed-case names (perhaps via a tool that mirrors your C# class names), you will regret it — every query will need quotes.

For string data comparisons, case sensitivity depends on the collation, but the default collation is case-sensitive:

-- PostgreSQL — case sensitive by default
SELECT * FROM users WHERE email = 'Admin@Example.com'; -- does NOT match 'admin@example.com'

-- Use ILIKE for case-insensitive pattern matching (SQL Server's LIKE is CI by default)
SELECT * FROM users WHERE email ILIKE 'admin@example.com';

-- Or LOWER()
SELECT * FROM users WHERE LOWER(email) = LOWER('Admin@Example.com');

2. TIMESTAMPTZ vs TIMESTAMP — always use TIMESTAMPTZ

SQL Server’s DATETIME2 stores a date and time with no timezone information, but by convention you store UTC. PostgreSQL has the same pattern available (TIMESTAMP WITHOUT TIME ZONE), but PostgreSQL’s TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE) is fundamentally different: it converts the stored value to UTC on insert (based on the session’s TimeZone setting) and converts back on retrieval.

-- SQL Server convention: store UTC, no enforcement
INSERT INTO events (created_at) VALUES (GETUTCDATE()); -- you must remember to use GETUTCDATE()

-- PostgreSQL TIMESTAMPTZ: the database handles UTC conversion
SET TIME ZONE 'America/New_York';
INSERT INTO events (created_at) VALUES (NOW()); -- NOW() is timestamptz, stored as UTC

-- Retrieve: PostgreSQL converts from UTC to the session timezone automatically
SELECT created_at FROM events; -- shows Eastern time if session is Eastern

The gotcha is that TIMESTAMP (without timezone) is a lie: PostgreSQL stores the literal date/time bytes with no timezone semantics. If different clients write to a TIMESTAMP column using different timezones, your data is inconsistent with no way to recover. Always use TIMESTAMPTZ.

3. BOOLEAN is not BIT — do not use 1/0

If you generate SQL dynamically or write raw SQL in your application, you may reach for 1 and 0 for boolean values. PostgreSQL will reject them for BOOLEAN columns.

-- SQL Server
INSERT INTO users (is_active) VALUES (1);  -- works, BIT column
UPDATE users SET is_active = 0 WHERE id = 1; -- works

-- PostgreSQL — these fail
INSERT INTO users (is_active) VALUES (1);   -- ERROR: column is of type boolean
UPDATE users SET is_active = 0 WHERE id = 1; -- ERROR

-- PostgreSQL — correct
INSERT INTO users (is_active) VALUES (TRUE);
INSERT INTO users (is_active) VALUES ('true');  -- PostgreSQL accepts this
UPDATE users SET is_active = FALSE WHERE id = 1;

4. String concatenation uses ||, not +

-- SQL Server
SELECT first_name + ' ' + last_name AS full_name FROM users;
-- NULL + anything = NULL in SQL Server

-- PostgreSQL
SELECT first_name || ' ' || last_name AS full_name FROM users;
-- NULL || anything = NULL — same behavior

-- PostgreSQL CONCAT() ignores NULLs
SELECT CONCAT(first_name, ' ', last_name) AS full_name FROM users;

5. DDL is transactional (but watch ADD COLUMN with defaults)

In SQL Server, DDL statements (like ALTER TABLE) can be wrapped in a transaction and rolled back, which is extremely useful for safe migrations. PostgreSQL DDL is also transactional, often more thoroughly so, but many developers do not realize this because it is less commonly emphasized. The real gotcha is ALTER TABLE ... ADD COLUMN with a non-null default: in older PostgreSQL versions it rewrote the entire table. Since PostgreSQL 11, adding a column with a constant default is instant, but adding a column with a volatile default (like NOW()) still requires a table rewrite.

6. SELECT TOP becomes LIMIT

-- SQL Server
SELECT TOP 10 * FROM orders ORDER BY created_at DESC;

-- PostgreSQL
SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;

-- Pagination
-- SQL Server (2012+)
SELECT * FROM orders ORDER BY created_at DESC
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

-- PostgreSQL
SELECT * FROM orders ORDER BY created_at DESC
LIMIT 10 OFFSET 20;

Hands-On Exercise

This exercise installs PostgreSQL locally and migrates a small SQL Server schema.

Setup:

# macOS
brew install postgresql@16
brew services start postgresql@16

# Create a database and user
psql postgres -c "CREATE USER myapp WITH PASSWORD 'secret';"
psql postgres -c "CREATE DATABASE myapp OWNER myapp;"

# Connect
psql postgresql://myapp:secret@localhost/myapp

Exercise — translate this SQL Server schema to PostgreSQL:

-- SQL Server (given)
CREATE TABLE customers (
    customer_id    INT             IDENTITY(1,1) PRIMARY KEY,
    external_ref   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    full_name      NVARCHAR(200)   NOT NULL,
    email          NVARCHAR(320)   NOT NULL,
    is_active      BIT             NOT NULL DEFAULT 1,
    created_at     DATETIME2(7)    NOT NULL DEFAULT SYSUTCDATETIME(),
    metadata       NVARCHAR(MAX),  -- stores JSON
    CONSTRAINT uq_customers_email UNIQUE (email)
);

CREATE INDEX IX_customers_email ON customers (email);

Solution — PostgreSQL equivalent:

CREATE TABLE customers (
    customer_id    INTEGER         GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    external_ref   UUID            NOT NULL DEFAULT gen_random_uuid(),
    full_name      TEXT            NOT NULL,
    email          TEXT            NOT NULL,
    is_active      BOOLEAN         NOT NULL DEFAULT TRUE,
    created_at     TIMESTAMPTZ     NOT NULL DEFAULT NOW(),
    metadata       JSONB,
    CONSTRAINT uq_customers_email UNIQUE (email)
);

-- B-tree index on email (the UNIQUE constraint creates one, but explicit for clarity)
-- This index is redundant given the unique constraint above; shown for illustration
-- CREATE INDEX idx_customers_email ON customers (email);

-- GIN index on JSONB for arbitrary key lookups
CREATE INDEX idx_customers_metadata ON customers USING GIN (metadata);

Now write a query that:

  1. Returns all active customers created in the last 30 days
  2. Extracts a hypothetical metadata->>'plan' field
  3. Orders by created_at descending with a limit of 25
SELECT
    customer_id,
    full_name,
    email,
    metadata->>'plan' AS plan,
    created_at
FROM customers
WHERE is_active = TRUE
  AND created_at >= NOW() - INTERVAL '30 days'
ORDER BY created_at DESC
LIMIT 25;

Run EXPLAIN ANALYZE on that query. Note whether the planner uses the GIN index or the unique index on email. Try adding WHERE metadata @> '{"plan": "pro"}' and observe whether the GIN index is used.


Quick Reference

-- Connection
psql postgresql://user:pass@host:5432/dbname

-- List objects
\l          -- databases
\c dbname   -- connect to database
\dt         -- tables
\d table    -- describe table
\dn         -- schemas
\df         -- functions
\x          -- toggle expanded output

-- Data types
INTEGER / BIGINT / SMALLINT
TEXT / VARCHAR(n)
BOOLEAN                          -- TRUE / FALSE
NUMERIC(p,s)
UUID                             -- gen_random_uuid()
TIMESTAMPTZ                      -- NOW(), CURRENT_TIMESTAMP
JSONB                            -- attributes->>'key', attributes @> '{}'

-- Auto-increment
id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY
id SERIAL PRIMARY KEY            -- older style, still valid

-- String ops
'a' || 'b'                       -- concatenation (not +)
LOWER(col) / UPPER(col)
col ILIKE '%pattern%'            -- case-insensitive LIKE

-- JSON
col->>'key'                      -- extract as text
col->'key'                       -- extract as JSONB
col @> '{"key": "val"}'::jsonb   -- containment check (uses GIN index)
col || '{"newkey": 1}'::jsonb    -- merge/update
col - 'key'                      -- remove key

-- Limits and pagination
SELECT ... LIMIT 10 OFFSET 20;

-- Performance
EXPLAIN ANALYZE SELECT ...;
SELECT * FROM pg_stat_activity WHERE state != 'idle';
SELECT pg_terminate_backend(pid);

-- Timestamps
NOW()                            -- current timestamptz
CURRENT_TIMESTAMP                -- same
NOW() - INTERVAL '30 days'      -- 30 days ago
EXTRACT(YEAR FROM created_at)   -- year component
DATE_TRUNC('month', created_at) -- truncate to month

-- Full-text
to_tsvector('english', text_col)
to_tsquery('english', 'word & other')
tsvector @@ tsquery              -- matches?
ts_rank(tsvector, tsquery)       -- relevance score


Prisma: The Entity Framework of TypeScript

For .NET engineers who know: EF Core — DbContext, migrations, LINQ queries, Include(), ThenInclude(), AsNoTracking(), and Scaffold-DbContext
You’ll learn: How Prisma maps onto every EF Core concept you already know, and where the abstractions diverge in ways that will catch you off guard
Time: 15-20 min read


The .NET Way (What You Already Know)

EF Core is a Code-First ORM (or Database-First via scaffolding). You define your domain model as C# classes, configure the mapping in DbContext (via fluent API or data annotations), generate migrations with Add-Migration, and query using LINQ. The full cycle:

// 1. Define the model
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public DateTime CreatedAt { get; set; }
    public int CustomerId { get; set; }
    public Customer Customer { get; set; }
    public List<OrderItem> Items { get; set; } = new();
}

// 2. Configure DbContext
public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>()
            .HasOne(o => o.Customer)
            .WithMany(c => c.Orders)
            .HasForeignKey(o => o.CustomerId);

        modelBuilder.Entity<Order>()
            .Property(o => o.Total)
            .HasPrecision(12, 2);
    }
}

// 3. Migrate
// dotnet ef migrations add AddOrders
// dotnet ef database update

// 4. Query
var orders = await context.Orders
    .Include(o => o.Customer)
    .Include(o => o.Items)
    .Where(o => o.Total > 100)
    .OrderByDescending(o => o.CreatedAt)
    .Take(20)
    .AsNoTracking()
    .ToListAsync();

This pattern — model, context, migration, query — is what Prisma replicates in TypeScript. The vocabulary is different but the structure is nearly isomorphic.


The Prisma Way

Installation

npm install prisma --save-dev
npm install @prisma/client

npx prisma init --datasource-provider postgresql

This creates:

  • prisma/schema.prisma — your schema file (replaces DbContext + model classes)
  • .env with a DATABASE_URL placeholder

The Schema File: DbContext + Models Combined

In EF Core, you have separate C# files for each entity class plus a DbContext to configure them. In Prisma, everything lives in a single schema.prisma file: the database connection, generator configuration, and all model definitions.

// prisma/schema.prisma

// 1. Generator — what Prisma should generate
generator client {
  provider = "prisma-client-js"
}

// 2. Datasource — your database connection
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

// 3. Models — equivalent to your EF Core entity classes + OnModelCreating config
model Customer {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String
  createdAt DateTime @default(now()) @map("created_at")
  orders    Order[]

  @@map("customers")
}

model Order {
  id         Int         @id @default(autoincrement())
  total      Decimal     @db.Decimal(12, 2)
  createdAt  DateTime    @default(now()) @map("created_at")
  customerId Int         @map("customer_id")
  customer   Customer    @relation(fields: [customerId], references: [id])
  items      OrderItem[]

  @@map("orders")
}

model OrderItem {
  id        Int     @id @default(autoincrement())
  orderId   Int     @map("order_id")
  productId Int     @map("product_id")
  quantity  Int
  price     Decimal @db.Decimal(10, 2)
  order     Order   @relation(fields: [orderId], references: [id])
  product   Product @relation(fields: [productId], references: [id])

  @@map("order_items")
}

model Product {
  id         Int         @id @default(autoincrement())
  name       String
  sku        String      @unique
  price      Decimal     @db.Decimal(10, 2)
  orderItems OrderItem[]

  @@map("products")
}

Schema attribute mapping:

| EF Core Configuration | Prisma Equivalent |
| --- | --- |
| [Key] / .HasKey() | @id |
| [DatabaseGenerated(Identity)] | @default(autoincrement()) |
| [Required] / .IsRequired() | Field with no ? (non-optional) |
| [MaxLength(200)] / .HasMaxLength() | @db.VarChar(200) |
| .HasPrecision(12, 2) | @db.Decimal(12, 2) |
| [Index(IsUnique = true)] | @unique or @@unique([field1, field2]) |
| [Index] / .HasIndex() | @@index([field1, field2]) |
| [Column("created_at")] | @map("created_at") |
| [Table("orders")] | @@map("orders") |
| HasOne().WithMany() | @relation() on the FK side |
| Owned() / value objects | Not directly supported; use embedded JSON or separate table |
| .HasDefaultValue() | @default(value) |
| .HasDefaultValueSql("now()") | @default(now()) |
| [Timestamp] / IsRowVersion() | No direct equivalent; use updatedAt DateTime @updatedAt |

Generating the Client: prisma generate

After editing the schema, you run:

npx prisma generate

This is equivalent to the EF Core model snapshot being updated. It reads your schema.prisma and generates a fully-typed TypeScript client in node_modules/@prisma/client. The generated client has:

  • TypeScript types for every model
  • Typed input/output types for every operation
  • Typed filter, orderBy, and include arguments

The generated types are not something you write — they are entirely inferred from your schema. This is the most significant Prisma advantage: your query types are always in sync with your schema without any manual DTO definition.

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

That prisma instance gives you fully typed access to every model.
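
If you need to name the shape of a query result (for a function signature or a service layer), the generated Prisma namespace also includes helper types derived from your schema. A minimal sketch based on the Order and Customer models above (the function name is illustrative):

import { Prisma, PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// The exact type of an Order loaded together with its customer,
// inferred from schema.prisma rather than written by hand
type OrderWithCustomer = Prisma.OrderGetPayload<{ include: { customer: true } }>;

async function loadOrderWithCustomer(id: number): Promise<OrderWithCustomer | null> {
  return prisma.order.findUnique({
    where: { id },
    include: { customer: true },
  });
}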

Migrations: prisma migrate

# Create a migration (equivalent to Add-Migration)
npx prisma migrate dev --name add_orders_table

# Apply to production (equivalent to Update-Database in CI/CD)
npx prisma migrate deploy

prisma migrate dev does three things:

  1. Generates a SQL migration file in prisma/migrations/
  2. Applies it to your development database
  3. Regenerates the Prisma client (prisma generate)

prisma migrate deploy applies pending migrations without generating or prompting — intended for CI/CD and production.

EF Core vs Prisma migration workflow:

Step | EF Core | Prisma
Create migration | dotnet ef migrations add <Name> | npx prisma migrate dev --name <name>
Apply to dev | dotnet ef database update | (done automatically by migrate dev)
Apply to production | dotnet ef database update (or bundle) | npx prisma migrate deploy
View pending | dotnet ef migrations list | npx prisma migrate status
Rollback | dotnet ef database update <PreviousMigration> | Manual SQL (see article 5.4)
Reset dev DB | dotnet ef database drop && dotnet ef database update | npx prisma migrate reset
Scaffold from existing DB | dotnet ef dbcontext scaffold | npx prisma db pull

Migration files are plain SQL stored in prisma/migrations/<timestamp>_<name>/migration.sql. Unlike EF Core’s C# migration files (with Up() and Down() methods), Prisma migration files are SQL-only and have no auto-generated Down(). See article 5.4 for rollback strategies.

Querying: Prisma Client vs EF Core LINQ

This is where the two ORMs feel most different in daily use. EF Core uses LINQ — method chaining that translates to SQL at runtime. Prisma uses a JavaScript object API where you pass nested objects describing your query.

EF Core vs Prisma API comparison:

Operation | EF Core (C#) | Prisma (TypeScript)
Find by PK | context.Orders.FindAsync(id) | prisma.order.findUnique({ where: { id } })
Find one (throws if missing) | context.Orders.SingleAsync(...) | prisma.order.findUniqueOrThrow({ where: { id } })
Find first | context.Orders.FirstOrDefaultAsync(...) | prisma.order.findFirst({ where: {...} })
Get all | context.Orders.ToListAsync() | prisma.order.findMany()
Filter | .Where(o => o.Total > 100) | findMany({ where: { total: { gt: 100 } } })
Sort | .OrderByDescending(o => o.CreatedAt) | findMany({ orderBy: { createdAt: 'desc' } })
Limit | .Take(20) | findMany({ take: 20 })
Skip | .Skip(40) | findMany({ skip: 40 })
Select fields | .Select(o => new { o.Id, o.Total }) | findMany({ select: { id: true, total: true } })
Include relation | .Include(o => o.Customer) | findMany({ include: { customer: true } })
Nested include | .ThenInclude(c => c.Address) | include: { customer: { include: { address: true } } }
Create | context.Orders.Add(order); await context.SaveChangesAsync() | prisma.order.create({ data: { ... } })
Update by PK | context.Entry(order).State = EntityState.Modified; | prisma.order.update({ where: { id }, data: { ... } })
Upsert | .AddOrUpdate() (not built-in in EF Core; use FindAsync + update) | prisma.order.upsert({ where, create, update })
Delete | context.Orders.Remove(order) | prisma.order.delete({ where: { id } })
Delete many | context.Orders.RemoveRange(...) | prisma.order.deleteMany({ where: { ... } })
Count | context.Orders.CountAsync(...) | prisma.order.count({ where: { ... } })
Aggregate | .SumAsync(o => o.Total) | prisma.order.aggregate({ _sum: { total: true } })
Group by | .GroupBy(o => o.CustomerId) | prisma.order.groupBy({ by: ['customerId'], _sum: { total: true } })

Complete examples side by side:

// EF Core — get orders with customer and items, filtered and paginated
var orders = await context.Orders
    .Include(o => o.Customer)
    .Include(o => o.Items)
        .ThenInclude(i => i.Product)
    .Where(o => o.Total > 100 && o.CreatedAt >= DateTime.UtcNow.AddDays(-30))
    .OrderByDescending(o => o.CreatedAt)
    .Skip(page * pageSize)
    .Take(pageSize)
    .AsNoTracking()
    .ToListAsync();
// Prisma — equivalent query
const orders = await prisma.order.findMany({
  where: {
    total: { gt: 100 },
    createdAt: { gte: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) },
  },
  include: {
    customer: true,
    items: {
      include: {
        product: true,
      },
    },
  },
  orderBy: { createdAt: 'desc' },
  skip: page * pageSize,
  take: pageSize,
});
// EF Core — create with nested relation
var order = new Order
{
    CustomerId = 1,
    Total = 249.99m,
    Items = new List<OrderItem>
    {
        new() { ProductId = 5, Quantity = 2, Price = 124.99m }
    }
};
context.Orders.Add(order);
await context.SaveChangesAsync();
// Prisma — create with nested relation
const order = await prisma.order.create({
  data: {
    customerId: 1,
    total: 249.99,
    items: {
      create: [
        { productId: 5, quantity: 2, price: 124.99 },
      ],
    },
  },
  include: { items: true },
});
// EF Core — upsert pattern
var existing = await context.Customers
    .FirstOrDefaultAsync(c => c.Email == email);
if (existing is null)
{
    context.Customers.Add(new Customer { Email = email, Name = name });
}
else
{
    existing.Name = name;
}
await context.SaveChangesAsync();
// Prisma — built-in upsert
const customer = await prisma.customer.upsert({
  where: { email },
  create: { email, name },
  update: { name },
});

Filtering Reference

Prisma’s where clause uses nested operator objects. The mapping from SQL/LINQ:

// Equality
where: { status: 'active' }

// Comparison
where: { total: { gt: 100 } }           // >
where: { total: { gte: 100 } }          // >=
where: { total: { lt: 100 } }           // <
where: { total: { lte: 100 } }          // <=
where: { total: { not: 100 } }          // !=

// String
where: { name: { contains: 'Smith' } }          // LIKE '%Smith%'
where: { name: { startsWith: 'Jo' } }           // LIKE 'Jo%'
where: { name: { endsWith: 'son' } }            // LIKE '%son'
where: { email: { contains: '@', mode: 'insensitive' } } // ILIKE

// Null checks
where: { deletedAt: null }              // IS NULL
where: { deletedAt: { not: null } }     // IS NOT NULL

// In / Not In
where: { status: { in: ['active', 'pending'] } }
where: { status: { notIn: ['deleted'] } }

// AND (default when multiple keys at same level)
where: { total: { gt: 100 }, status: 'active' }

// OR
where: { OR: [{ status: 'active' }, { status: 'pending' }] }

// AND explicit
where: { AND: [{ total: { gt: 100 } }, { createdAt: { gte: startDate } }] }

// NOT
where: { NOT: { status: 'deleted' } }

// Relation filter — orders that have at least one item
where: { items: { some: { quantity: { gt: 0 } } } }

// Relation filter — all items satisfy condition
where: { items: { every: { price: { gt: 0 } } } }

// Relation filter — no items match condition
where: { items: { none: { quantity: 0 } } }

Pagination: Cursor vs Offset

// Offset pagination (like EF Core Skip/Take)
const page2 = await prisma.order.findMany({
  skip: 20,
  take: 10,
  orderBy: { createdAt: 'desc' },
});

// Cursor pagination (more efficient for large datasets)
// First page — no cursor
const firstPage = await prisma.order.findMany({
  take: 10,
  orderBy: { id: 'asc' },
});

// Subsequent page — pass the last item's id as cursor
const nextPage = await prisma.order.findMany({
  take: 10,
  skip: 1,             // skip the cursor record itself
  cursor: { id: firstPage[firstPage.length - 1].id },
  orderBy: { id: 'asc' },
});

Cursor pagination is more efficient for deep pages on large tables because the database can seek directly to the cursor via an index instead of scanning and discarding every skipped row. The tradeoff: you cannot jump to an arbitrary page, only move forward or backward from a known cursor.

Transactions

// EF Core transaction
using var transaction = await context.Database.BeginTransactionAsync();
try
{
    context.Orders.Add(order);
    await context.SaveChangesAsync();
    context.Inventory.Update(inventoryUpdate);
    await context.SaveChangesAsync();
    await transaction.CommitAsync();
}
catch
{
    await transaction.RollbackAsync();
    throw;
}
// Prisma — interactive transaction (most flexible, like EF Core)
const [order, inventory] = await prisma.$transaction(async (tx) => {
  const newOrder = await tx.order.create({
    data: { customerId: 1, total: 99.99 },
  });

  const updatedInventory = await tx.inventory.update({
    where: { productId: 5 },
    data: { quantity: { decrement: 1 } },
  });

  return [newOrder, updatedInventory];
});

// Prisma — batch transaction (simpler, all succeed or all fail)
const [deleteOld, createNew] = await prisma.$transaction([
  prisma.order.deleteMany({ where: { createdAt: { lt: cutoffDate } } }),
  prisma.order.create({ data: { customerId: 1, total: 50 } }),
]);

The interactive transaction (async (tx) => { ... }) is the EF Core equivalent — you get a transaction-scoped client and can use any Prisma operations inside it. The batch transaction is a lighter syntax for a fixed list of operations.

Raw SQL

// EF Core raw SQL
var orders = await context.Orders
    .FromSqlRaw("SELECT * FROM orders WHERE customer_id = {0}", customerId)
    .ToListAsync();

// EF Core — raw SQL that doesn't return entities
await context.Database.ExecuteSqlRawAsync(
    "UPDATE orders SET status = 'archived' WHERE created_at < {0}", cutoffDate);
// Prisma — raw query returning typed results
const orders = await prisma.$queryRaw<Order[]>`
  SELECT * FROM orders WHERE customer_id = ${customerId}
`;

// Prisma — raw execute (no return value needed)
const count = await prisma.$executeRaw`
  UPDATE orders SET status = 'archived' WHERE created_at < ${cutoffDate}
`;

// Prisma — raw with Prisma.sql for dynamic queries (safe parameterization)
import { Prisma } from '@prisma/client';

const minTotal = 100;
const orders = await prisma.$queryRaw<Order[]>(
  Prisma.sql`SELECT * FROM orders WHERE total > ${minTotal}`
);

Always use tagged template literals (the backtick syntax) with $queryRaw and $executeRaw. Prisma automatically parameterizes the values, preventing SQL injection. If you need to interpolate SQL fragments dynamically (such as a column name), use Prisma.raw() — but do so carefully and never with user input.
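
For example, if a sort column must be chosen dynamically, keep it to a hard-coded allow-list and splice it in with Prisma.raw() while the values stay parameterized. A sketch; the allow-list, column names, and function name are illustrative:

import { Prisma, PrismaClient, type Order } from '@prisma/client';

const prisma = new PrismaClient();

// Columns the caller is allowed to sort by; anything else falls back to created_at
const allowedSortColumns = new Set(['created_at', 'total']);

async function listOrders(sortBy: string, minTotal: number): Promise<Order[]> {
  const column = allowedSortColumns.has(sortBy) ? sortBy : 'created_at';

  // The column name is spliced in with Prisma.raw (allow-list only);
  // minTotal remains a bound parameter
  return prisma.$queryRaw<Order[]>(
    Prisma.sql`SELECT * FROM orders WHERE total > ${minTotal} ORDER BY ${Prisma.raw(column)} DESC`
  );
}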

Seeding

// prisma/seed.ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // Upsert so seed is idempotent
  const admin = await prisma.customer.upsert({
    where: { email: 'admin@example.com' },
    update: {},
    create: {
      email: 'admin@example.com',
      name: 'Admin User',
    },
  });

  console.log(`Seeded customer: ${admin.id}`);
}

main()
  .catch(console.error)
  .finally(() => prisma.$disconnect());
// package.json — register the seed script
{
  "prisma": {
    "seed": "ts-node prisma/seed.ts"
  }
}
npx prisma db seed
# Also runs automatically after: npx prisma migrate reset

Introspection: Scaffold-DbContext Equivalent

If you are inheriting an existing database (rather than building code-first), Prisma can generate the schema from the existing tables:

# Like Scaffold-DbContext — generates schema.prisma from your existing database
npx prisma db pull

# Then generate the client from the introspected schema
npx prisma generate

This fills schema.prisma with model definitions inferred from your tables. Prisma infers relation fields from the foreign keys, but the generated names usually need adjustment, and you may want @map/@@map to get idiomatic model names. After reviewing and editing the generated schema, you can begin using Prisma Client immediately.


Key Differences

Concept | EF Core | Prisma
Schema definition | C# entity classes + DbContext | Single schema.prisma file
Client generation | Implicit — types are your C# classes | Explicit — prisma generate produces TS types
Migration files | C# Up()/Down() methods | Plain SQL files (no auto-rollback)
Query API | LINQ method chaining | Nested JS object API
Change tracking | Built-in (AsNoTracking() opt-out) | None — every query is stateless
Lazy loading | Supported (with proxies) | Not supported — always explicit include
N+1 prevention | Include() required | include required — same discipline
Raw SQL | FromSqlRaw(), ExecuteSqlRaw() | $queryRaw, $executeRaw
Transactions | BeginTransactionAsync() | $transaction()
Database-first | Scaffold-DbContext | prisma db pull
Seeding | HasData() in OnModelCreating + migrations | Separate seed.ts script
Connection pool | ADO.NET pool (managed by driver) | Built into Prisma Query Engine
Multiple DB support | Yes (different providers) | One datasource per schema file; use separate schemas/clients per database

Gotchas for .NET Engineers

1. The Prisma Query Engine is a Rust binary — cold starts matter in serverless

When you run prisma generate, Prisma downloads a Rust-compiled query engine binary for your platform. This binary handles connection pooling and query execution. In a long-running process (a traditional Node.js API server), this is invisible. In serverless environments (AWS Lambda, Vercel Edge Functions, Cloudflare Workers), the binary adds cold start time and can cause deployment size issues.

For serverless, use Prisma Accelerate (Prisma's connection pool proxy service) or a separately managed PgBouncer, and point DATABASE_URL at the pooler endpoint rather than directly at the database:

# Direct connection (fine for long-running servers)
DATABASE_URL="postgresql://user:pass@host:5432/db"

# PgBouncer / Prisma Accelerate (for serverless)
DATABASE_URL="postgresql://user:pass@pgbouncer-host:6432/db?pgbouncer=true&connection_limit=1"

The connection_limit=1 tells Prisma not to try to maintain a pool itself (the pooler handles it).

2. N+1 is completely invisible without include

EF Core supports lazy loading (though it is opt-in and generally discouraged). Prisma does not. If you access a relation that you did not include, the property is simply absent: you get undefined, not a lazily executed extra query.

// This returns orders without the customer relation loaded at all
const orders = await prisma.order.findMany();

for (const order of orders) {
  console.log(order.customer); // undefined (and a TypeScript error), not an automatic extra query
}

// This is the N+1 anti-pattern if done manually
const orders = await prisma.order.findMany();
for (const order of orders) {
  // This is a separate DB query per order — N+1
  const customer = await prisma.customer.findUnique({
    where: { id: order.customerId },
  });
}

// Correct — use include
const orders = await prisma.order.findMany({
  include: { customer: true },
});

The silver lining: Prisma’s design makes N+1 obvious and explicit. You will not accidentally trigger lazy loading through a navigation property access. But you must plan your include structure upfront.

3. findUnique vs findFirst — the semantics are different

This trips up developers expecting something like FirstOrDefaultAsync to just work everywhere.

// findUnique — ONLY works on fields marked @id or @unique in schema
// Uses a unique index — fast
const customer = await prisma.customer.findUnique({
  where: { id: 42 },       // OK — @id
});

const customer2 = await prisma.customer.findUnique({
  where: { email: 'foo@example.com' }, // OK — @unique
});

// This does NOT compile — 'name' is not @id or @unique
const customer3 = await prisma.customer.findUnique({
  where: { name: 'Chris' }, // TypeScript error
});

// findFirst — works on any field, returns first match (or null)
// May do a full table scan if field is not indexed
const customer4 = await prisma.customer.findFirst({
  where: { name: 'Chris' },
  orderBy: { createdAt: 'desc' },
});

findUnique gives you a TypeScript-level guarantee that you are querying on a unique field and that the result is either the record or null (never a list). findFirst is the general-purpose single-record fetch. Use findUnique whenever the field is @id or @unique — it is both faster and more semantically correct.

4. Prisma does not track changes — every update requires explicit fields

EF Core’s change tracker lets you load an entity, mutate its properties, and call SaveChangesAsync(). Prisma has no change tracker. Every update call must specify the new values explicitly.

// EF Core — mutate and save
var customer = await context.Customers.FindAsync(id);
customer.Name = "New Name";
customer.UpdatedAt = DateTime.UtcNow;
await context.SaveChangesAsync(); // EF Core computes the UPDATE from change tracking
// Prisma — must specify every field you want to change
const customer = await prisma.customer.update({
  where: { id },
  data: {
    name: 'New Name',
    updatedAt: new Date(),
  },
});

// You cannot do this:
const customer = await prisma.customer.findUnique({ where: { id } });
customer.name = 'New Name'; // mutating the object does nothing to the database
await prisma.customer.save(customer); // this method does not exist

This is actually safer in many respects — there are no accidental implicit saves — but it requires a mindset shift if you are used to the entity-mutation pattern.

5. Decimal requires special handling in TypeScript

PostgreSQL’s NUMERIC/DECIMAL type maps to Prisma’s Decimal type, which uses the decimal.js library internally. This is not a native JavaScript number — it is an object.

// Prisma returns Decimal objects for Decimal fields, not numbers
const order = await prisma.order.findUnique({ where: { id: 1 } });
console.log(typeof order.total); // 'object', not 'number'
console.log(order.total.toString()); // '249.99'
console.log(order.total.toNumber()); // 249.99 (native JS number, precision loss risk)

// Comparison
if (order.total.greaterThan(100)) { ... }    // correct
if (order.total > 100) { ... }               // does not work as expected
if (order.total.toNumber() > 100) { ... }    // works but loses precision for large values

// When creating/updating, pass a string or a Decimal
// (the Decimal class is exposed as Prisma.Decimal from '@prisma/client')
await prisma.order.create({
  data: {
    total: new Prisma.Decimal('249.99'), // or just '249.99' — Prisma accepts strings
    customerId: 1,
  },
});

For financial data, treat Decimal values as opaque objects and use the decimal.js API for arithmetic. Serialize them to strings for API responses (total.toString()), not toNumber().
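
At the API boundary this usually means mapping to a response shape where the Decimal becomes a string; a small illustrative sketch (the DTO shape and function name are hypothetical):

import { Prisma } from '@prisma/client';

// Hypothetical response DTO: money travels as a string, never as a float
function toOrderResponse(order: { id: number; total: Prisma.Decimal }) {
  return {
    id: order.id,
    total: order.total.toString(), // '249.99', exact, with no floating-point rounding
  };
}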


Hands-On Exercise

Build a typed data access layer for a blog system.

Schema:

// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        Int       @id @default(autoincrement())
  email     String    @unique
  name      String
  posts     Post[]
  createdAt DateTime  @default(now()) @map("created_at")

  @@map("users")
}

model Post {
  id          Int       @id @default(autoincrement())
  title       String
  content     String
  published   Boolean   @default(false)
  authorId    Int       @map("author_id")
  author      User      @relation(fields: [authorId], references: [id])
  tags        Tag[]
  createdAt   DateTime  @default(now()) @map("created_at")
  updatedAt   DateTime  @updatedAt @map("updated_at")

  @@index([authorId])
  @@map("posts")
}

model Tag {
  id    Int    @id @default(autoincrement())
  name  String @unique
  posts Post[]

  @@map("tags")
}
npx prisma migrate dev --name init

Exercise — implement these functions:

// src/blog-repository.ts
import { PrismaClient, Prisma } from '@prisma/client';

const prisma = new PrismaClient();

// 1. Get published posts with author, paginated, optional tag filter
export async function getPublishedPosts(options: {
  page: number;
  pageSize: number;
  tag?: string;
}) {
  const { page, pageSize, tag } = options;

  return prisma.post.findMany({
    where: {
      published: true,
      ...(tag ? { tags: { some: { name: tag } } } : {}),
    },
    include: {
      author: { select: { id: true, name: true } },
      tags: true,
    },
    orderBy: { createdAt: 'desc' },
    skip: page * pageSize,
    take: pageSize,
  });
}

// 2. Create a post with tags (create tags if they don't exist)
export async function createPost(data: {
  title: string;
  content: string;
  authorId: number;
  tags: string[];
}) {
  return prisma.post.create({
    data: {
      title: data.title,
      content: data.content,
      authorId: data.authorId,
      tags: {
        connectOrCreate: data.tags.map((name) => ({
          where: { name },
          create: { name },
        })),
      },
    },
    include: { tags: true, author: true },
  });
}

// 3. Publish a post — return null if not found
export async function publishPost(id: number) {
  try {
    return await prisma.post.update({
      where: { id },
      data: { published: true },
    });
  } catch (e) {
    if (e instanceof Prisma.PrismaClientKnownRequestError && e.code === 'P2025') {
      return null; // Record not found
    }
    throw e;
  }
}

// 4. Get post counts grouped by author
export async function getPostCountByAuthor() {
  return prisma.post.groupBy({
    by: ['authorId'],
    _count: { id: true },
    orderBy: { _count: { id: 'desc' } },
  });
}

Quick Reference

# Setup
npm install prisma --save-dev && npm install @prisma/client
npx prisma init --datasource-provider postgresql

# Development cycle
npx prisma migrate dev --name <description>  # create + apply migration
npx prisma generate                          # regenerate client after schema change
npx prisma db pull                           # introspect existing DB into schema
npx prisma db push                           # push schema to DB without migration (prototyping)
npx prisma migrate reset                     # drop DB, re-apply all migrations, seed
npx prisma migrate status                    # list applied/pending migrations
npx prisma studio                            # open browser-based data editor

# Production
npx prisma migrate deploy                    # apply pending migrations (no prompts)
npx prisma db seed                           # run seed script
// Common query patterns
import { PrismaClient, Prisma } from '@prisma/client';
const prisma = new PrismaClient();

// Instantiation (singleton pattern for production)
// lib/prisma.ts
const globalForPrisma = global as unknown as { prisma: PrismaClient };
export const prisma = globalForPrisma.prisma ?? new PrismaClient();
if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma;

// CRUD
await prisma.model.findUnique({ where: { id } })
await prisma.model.findUniqueOrThrow({ where: { id } })
await prisma.model.findFirst({ where: { ... } })
await prisma.model.findMany({ where, orderBy, skip, take, include, select })
await prisma.model.create({ data: { ... }, include: { ... } })
await prisma.model.update({ where: { id }, data: { ... } })
await prisma.model.upsert({ where, create, update })
await prisma.model.delete({ where: { id } })
await prisma.model.deleteMany({ where: { ... } })
await prisma.model.count({ where: { ... } })
await prisma.model.aggregate({ where, _sum, _avg, _min, _max, _count })
await prisma.model.groupBy({ by: ['field'], _count: true, orderBy })

// Transactions
await prisma.$transaction(async (tx) => { await tx.model.create(...) })
await prisma.$transaction([op1, op2, op3])

// Raw SQL
await prisma.$queryRaw<T[]>`SELECT * FROM table WHERE id = ${id}`
await prisma.$executeRaw`UPDATE table SET col = ${val} WHERE id = ${id}`

// Error handling
import { Prisma } from '@prisma/client';
if (e instanceof Prisma.PrismaClientKnownRequestError) {
  e.code // 'P2002' = unique constraint, 'P2025' = record not found
}

// Common error codes
// P2002 — Unique constraint failed
// P2003 — Foreign key constraint failed
// P2025 — Record to update/delete not found

Further Reading

Drizzle ORM: The Lightweight Alternative

For .NET engineers who know: Dapper, ADO.NET, and the tradeoff between ORM abstraction and SQL control
You'll learn: How Drizzle sits closer to SQL than Prisma — you write TypeScript that looks like SQL — and when to choose it over Prisma
Time: 15-20 min read


The .NET Way (What You Already Know)

You have used both Dapper and EF Core, and you know why each exists. Dapper is thin: you write SQL, pass parameters, and get back typed objects. EF Core is thick: it manages your schema, generates SQL, tracks changes, and gives you LINQ. The tradeoff is control vs. productivity.

Dapper is the right tool when:

  • Your queries are complex enough that EF Core’s generated SQL is wrong or slow
  • You are working with an existing database you do not own
  • You need stored procedures, table-valued parameters, or multi-result-set queries
  • Performance is critical and you want to see exactly what SQL is running

EF Core is the right tool when:

  • You are building a new schema and want migrations managed for you
  • Your queries are straightforward CRUD with relations
  • You want to move fast without writing SQL for every operation

Drizzle occupies the same space as Dapper in this tradeoff, but it is not a thin ADO.NET wrapper. It is a full query builder that:

  • Lets you define schemas in TypeScript (no separate schema file)
  • Generates SQL that looks exactly like the SQL you would write by hand
  • Infers TypeScript types from your schema definitions
  • Has a migration tool (drizzle-kit) that generates SQL migration files

Think of it as Dapper with typed schema definitions and a query builder API.


The Drizzle Way

Installation

npm install drizzle-orm
npm install drizzle-kit --save-dev

# PostgreSQL driver
npm install postgres
# or
npm install pg @types/pg

Drizzle supports multiple drivers: postgres (postgres.js), pg (node-postgres), @neondatabase/serverless (Neon’s HTTP driver for serverless), @planetscale/database (PlanetScale MySQL), better-sqlite3 (SQLite), and others. You pick the driver; Drizzle wraps it.

Defining Schemas in TypeScript

This is Drizzle’s most distinctive feature compared to Prisma. There is no separate .prisma file. Your schema is TypeScript code in .ts files — typically in a src/db/schema.ts file or split across domain files.

// src/db/schema.ts
import {
  pgTable,
  serial,
  text,
  varchar,
  numeric,
  boolean,
  timestamp,
  integer,
  uuid,
  index,
  uniqueIndex,
} from 'drizzle-orm/pg-core';
import { relations } from 'drizzle-orm';

export const customers = pgTable(
  'customers',
  {
    id: serial('id').primaryKey(),
    externalRef: uuid('external_ref').defaultRandom().notNull(),
    fullName: text('full_name').notNull(),
    email: varchar('email', { length: 320 }).notNull(),
    isActive: boolean('is_active').notNull().default(true),
    createdAt: timestamp('created_at', { withTimezone: true })
      .notNull()
      .defaultNow(),
  },
  (table) => ({
    emailIdx: uniqueIndex('uq_customers_email').on(table.email),
  })
);

export const orders = pgTable(
  'orders',
  {
    id: serial('id').primaryKey(),
    customerId: integer('customer_id')
      .notNull()
      .references(() => customers.id),
    total: numeric('total', { precision: 12, scale: 2 }).notNull(),
    status: text('status').notNull().default('pending'),
    createdAt: timestamp('created_at', { withTimezone: true })
      .notNull()
      .defaultNow(),
  },
  (table) => ({
    customerIdx: index('idx_orders_customer_id').on(table.customerId),
  })
);

export const orderItems = pgTable('order_items', {
  id: serial('id').primaryKey(),
  orderId: integer('order_id')
    .notNull()
    .references(() => orders.id),
  productId: integer('product_id')
    .notNull()
    .references(() => products.id),
  quantity: integer('quantity').notNull(),
  price: numeric('price', { precision: 10, scale: 2 }).notNull(),
});

export const products = pgTable('products', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  sku: varchar('sku', { length: 50 }).notNull().unique(),
  price: numeric('price', { precision: 10, scale: 2 }).notNull(),
});

// Relations — for the query builder's join inference
export const customersRelations = relations(customers, ({ many }) => ({
  orders: many(orders),
}));

export const ordersRelations = relations(orders, ({ one, many }) => ({
  customer: one(customers, {
    fields: [orders.customerId],
    references: [customers.id],
  }),
  items: many(orderItems),
}));

export const orderItemsRelations = relations(orderItems, ({ one }) => ({
  order: one(orders, {
    fields: [orderItems.orderId],
    references: [orders.id],
  }),
  product: one(products, {
    fields: [orderItems.productId],
    references: [products.id],
  }),
}));

Drizzle column types for PostgreSQL:

SQL Type | Drizzle Function
SERIAL / INTEGER auto-increment | serial('col')
BIGSERIAL | bigserial('col', { mode: 'bigint' })
INTEGER | integer('col')
BIGINT | bigint('col', { mode: 'number' | 'bigint' })
TEXT | text('col')
VARCHAR(n) | varchar('col', { length: n })
BOOLEAN | boolean('col')
NUMERIC(p,s) | numeric('col', { precision: p, scale: s })
REAL / FLOAT4 | real('col')
DOUBLE PRECISION | doublePrecision('col')
UUID | uuid('col').defaultRandom()
TIMESTAMP WITH TIME ZONE | timestamp('col', { withTimezone: true })
TIMESTAMP | timestamp('col')
DATE | date('col')
JSONB | jsonb('col')
JSON | json('col')
BYTEA | customType or raw SQL

Setting Up the Client

// src/db/index.ts
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import * as schema from './schema';

const connectionString = process.env.DATABASE_URL!;

// For queries (with connection pool)
const queryClient = postgres(connectionString);
export const db = drizzle(queryClient, { schema });

// For migrations only (single connection)
export const migrationClient = postgres(connectionString, { max: 1 });

The schema object passed to drizzle() enables relational queries (the with API described below). Without it, you can still use the query builder but not the relational API.
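
If your team uses node-postgres (pg) instead of postgres.js, the setup is nearly identical; a sketch under that assumption:

// src/db/index.ts (node-postgres variant)
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';
import * as schema from './schema';

// A standard node-postgres pool; Drizzle wraps the driver rather than replacing it
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool, { schema });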

The Query Builder API: SQL in TypeScript

Drizzle’s query API is designed to mirror SQL syntax. If you know SQL, the query builder feels immediately familiar — more so than Prisma’s nested object style.

import { db } from './db';
import { orders, customers, orderItems, products } from './db/schema';
import { eq, gt, and, desc, sql, count, sum } from 'drizzle-orm';

// SELECT * FROM orders WHERE id = 1
const order = await db.select().from(orders).where(eq(orders.id, 1));

// SELECT with specific columns
const result = await db
  .select({
    id: orders.id,
    total: orders.total,
    customerName: customers.fullName,
  })
  .from(orders)
  .innerJoin(customers, eq(orders.customerId, customers.id))
  .where(and(gt(orders.total, '100'), eq(orders.status, 'active')))
  .orderBy(desc(orders.createdAt))
  .limit(20)
  .offset(40);

// INSERT
const [newOrder] = await db
  .insert(orders)
  .values({
    customerId: 1,
    total: '249.99',
    status: 'pending',
  })
  .returning(); // returns the inserted row — like EF Core's SaveChanges + re-read

// INSERT multiple rows
const inserted = await db
  .insert(products)
  .values([
    { name: 'Widget A', sku: 'WGT-001', price: '9.99' },
    { name: 'Widget B', sku: 'WGT-002', price: '19.99' },
  ])
  .returning({ id: products.id, sku: products.sku });

// UPDATE
const [updated] = await db
  .update(orders)
  .set({ status: 'shipped' })
  .where(eq(orders.id, 1))
  .returning();

// DELETE
const deleted = await db
  .delete(orders)
  .where(eq(orders.id, 1))
  .returning();

// Aggregates
const [stats] = await db
  .select({
    orderCount: count(orders.id),
    totalRevenue: sum(orders.total),
  })
  .from(orders)
  .where(eq(orders.status, 'completed'));

The operators (eq, gt, and, desc, etc.) are imported from drizzle-orm and compose cleanly. This is much closer to writing SQL by hand than Prisma’s object-based API.

Joins

Drizzle handles joins explicitly, unlike Prisma’s include. You write the join yourself and select the columns you want:

import { eq, and } from 'drizzle-orm';

// INNER JOIN
const ordersWithCustomers = await db
  .select({
    orderId: orders.id,
    total: orders.total,
    customerName: customers.fullName,
    customerEmail: customers.email,
  })
  .from(orders)
  .innerJoin(customers, eq(orders.customerId, customers.id))
  .where(eq(orders.status, 'active'));

// LEFT JOIN (Dapper equivalent: you handle the null yourself)
const customersWithOrders = await db
  .select({
    customerId: customers.id,
    customerName: customers.fullName,
    orderId: orders.id,      // will be null for customers with no orders
    orderTotal: orders.total,
  })
  .from(customers)
  .leftJoin(orders, eq(customers.id, orders.customerId));

// Multiple joins
const orderDetails = await db
  .select({
    orderId: orders.id,
    total: orders.total,
    customerName: customers.fullName,
    productName: products.name,
    quantity: orderItems.quantity,
    itemPrice: orderItems.price,
  })
  .from(orders)
  .innerJoin(customers, eq(orders.customerId, customers.id))
  .innerJoin(orderItems, eq(orderItems.orderId, orders.id))
  .innerJoin(products, eq(orderItems.productId, products.id))
  .where(eq(orders.id, 42));

Relational Query API (the with API)

Writing joins manually is powerful but verbose for simple cases. Drizzle also provides a higher-level relational API that looks more like Prisma when you pass schema to the drizzle() constructor:

// Uses the relations you defined in schema.ts
const orderWithRelations = await db.query.orders.findFirst({
  where: eq(orders.id, 42),
  with: {
    customer: true,
    items: {
      with: {
        product: true,
      },
    },
  },
});

// Result is fully typed — TypeScript knows the shape
// orderWithRelations.customer.fullName
// orderWithRelations.items[0].product.name

The with API is similar to Prisma’s include. Use the join-based query builder when you need precise column selection, aggregation, or complex conditions. Use the with API when you want to load an entity graph without writing joins manually.

Subqueries

Drizzle supports subqueries natively — something Prisma cannot express without raw SQL:

import { sql, inArray } from 'drizzle-orm';

// Subquery in WHERE
const topCustomerIds = db
  .select({ id: customers.id })
  .from(customers)
  .where(eq(customers.isActive, true))
  .limit(10)
  .as('top_customers'); // alias the subquery

const topCustomerOrders = await db
  .select()
  .from(orders)
  .where(inArray(orders.customerId, db.select({ id: topCustomerIds.id }).from(topCustomerIds)));

// Scalar subquery in SELECT
const ordersWithCounts = await db
  .select({
    id: orders.id,
    total: orders.total,
    itemCount: sql<number>`(
      SELECT COUNT(*) FROM ${orderItems} WHERE ${orderItems.orderId} = ${orders.id}
    )`.as('item_count'),
  })
  .from(orders);

CTEs (Common Table Expressions)

import { sql, sum, eq } from 'drizzle-orm';

// Using $with for CTEs
const highValueCustomers = db.$with('high_value_customers').as(
  db
    .select({ customerId: orders.customerId, total: sum(orders.total).as('lifetime_value') })
    .from(orders)
    .groupBy(orders.customerId)
    .having(sql`sum(${orders.total}) > 1000`)
);

const result = await db
  .with(highValueCustomers)
  .select({
    customerName: customers.fullName,
    email: customers.email,
    lifetimeValue: highValueCustomers.total,
  })
  .from(highValueCustomers)
  .innerJoin(customers, eq(customers.id, highValueCustomers.customerId));

Transactions

// Drizzle transaction — wraps a callback, auto-rollback on throw
const result = await db.transaction(async (tx) => {
  const [order] = await tx
    .insert(orders)
    .values({ customerId: 1, total: '99.99', status: 'pending' })
    .returning();

  await tx.insert(orderItems).values({
    orderId: order.id,
    productId: 5,
    quantity: 2,
    price: '49.99',
  });

  // If anything throws here, both inserts are rolled back
  return order;
});

// Savepoints (nested transactions)
await db.transaction(async (tx) => {
  await tx.insert(orders).values({ customerId: 1, total: '50.00', status: 'pending' });

  try {
    await tx.transaction(async (nested) => {
      // This inner transaction can be rolled back independently
      await nested.insert(orderItems).values({ orderId: 999, productId: 5, quantity: 1, price: '50.00' });
    });
  } catch {
    // Inner transaction failed (FK violation, etc.) — outer continues
    console.warn('Item insert failed, continuing without it');
  }
});

Raw SQL

Drizzle provides sql tagged template literal for inline SQL expressions and for fully raw queries:

import { sql } from 'drizzle-orm';

// Inline SQL expression (used within query builder)
const result = await db
  .select({
    id: orders.id,
    formattedTotal: sql<string>`'$' || ${orders.total}::text`,
  })
  .from(orders);

// Fully raw query — typed
const rawResult = await db.execute<{ id: number; total: string }>(
  sql`SELECT id, total FROM orders WHERE created_at > NOW() - INTERVAL '30 days'`
);

// Raw with parameters (safe — parameterized)
const customerId = 42;
const rawOrders = await db.execute<{ id: number; total: string }>(
  sql`SELECT id, total FROM orders WHERE customer_id = ${customerId}`
);

Migrations with drizzle-kit

// drizzle.config.ts — configuration file
import type { Config } from 'drizzle-kit';

export default {
  schema: './src/db/schema.ts',
  out: './drizzle',              // where migration files are written
  dialect: 'postgresql',
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
} satisfies Config;
# Generate migration SQL from schema changes
npx drizzle-kit generate

# Apply migrations to the database
npx drizzle-kit migrate

# Open Drizzle Studio (browser UI for your data)
npx drizzle-kit studio

# Push schema directly to DB without migration files (prototyping only)
npx drizzle-kit push

# Inspect existing database and generate schema
npx drizzle-kit introspect

Unlike Prisma, drizzle-kit generate only generates the SQL file. It does not apply it. You apply with drizzle-kit migrate or by running the SQL yourself. This gives you more control but requires more steps.

The generated migration file is plain SQL:

-- drizzle/0000_initial.sql
CREATE TABLE IF NOT EXISTS "customers" (
  "id" serial PRIMARY KEY NOT NULL,
  "external_ref" uuid DEFAULT gen_random_uuid() NOT NULL,
  "full_name" text NOT NULL,
  "email" varchar(320) NOT NULL,
  "is_active" boolean DEFAULT true NOT NULL,
  "created_at" timestamp with time zone DEFAULT now() NOT NULL
);
CREATE UNIQUE INDEX IF NOT EXISTS "uq_customers_email" ON "customers" ("email");

You can read, edit, and version-control these files directly. This is both a feature and a responsibility: Drizzle trusts you to know what the SQL does.
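
You can also apply the generated migrations from application code at startup instead of the CLI; drizzle-orm ships a programmatic migrator. A minimal sketch, assuming the postgres.js driver and the ./drizzle output folder from the config above:

// scripts/migrate.ts (illustrative)
import { drizzle } from 'drizzle-orm/postgres-js';
import { migrate } from 'drizzle-orm/postgres-js/migrator';
import postgres from 'postgres';

// Single connection for migrations, as recommended earlier in the chapter
const migrationClient = postgres(process.env.DATABASE_URL!, { max: 1 });

await migrate(drizzle(migrationClient), { migrationsFolder: './drizzle' });
await migrationClient.end();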


Drizzle vs Prisma: Side-by-Side

Same operations expressed in both ORMs. Use this to calibrate which API you prefer for a given task.

Simple find:

// Prisma
const order = await prisma.order.findUnique({ where: { id: 42 } });

// Drizzle
const [order] = await db.select().from(orders).where(eq(orders.id, 42));

Filter and sort:

// Prisma
const results = await prisma.order.findMany({
  where: { status: 'active', total: { gt: 100 } },
  orderBy: { createdAt: 'desc' },
  take: 20,
});

// Drizzle
const results = await db
  .select()
  .from(orders)
  .where(and(eq(orders.status, 'active'), gt(orders.total, '100')))
  .orderBy(desc(orders.createdAt))
  .limit(20);

Join with selected columns:

// Prisma — always returns full models; use select to trim
const results = await prisma.order.findMany({
  select: {
    id: true,
    total: true,
    customer: { select: { fullName: true } },
  },
});

// Drizzle — explicit join, explicit column selection
const results = await db
  .select({
    id: orders.id,
    total: orders.total,
    customerName: customers.fullName,
  })
  .from(orders)
  .innerJoin(customers, eq(orders.customerId, customers.id));

Create:

// Prisma
const order = await prisma.order.create({
  data: { customerId: 1, total: 249.99, status: 'pending' },
});

// Drizzle
const [order] = await db
  .insert(orders)
  .values({ customerId: 1, total: '249.99', status: 'pending' })
  .returning();

Upsert:

// Prisma
const customer = await prisma.customer.upsert({
  where: { email },
  create: { email, fullName: name },
  update: { fullName: name },
});

// Drizzle
// .onConflictDoUpdate() is a method on the insert builder, so no extra import is needed
const [customer] = await db
  .insert(customers)
  .values({ email, fullName: name })
  .onConflictDoUpdate({
    target: customers.email,
    set: { fullName: name },
  })
  .returning();

Aggregate:

// Prisma
const result = await prisma.order.aggregate({
  _sum: { total: true },
  _count: { id: true },
  where: { status: 'completed' },
});

// Drizzle
const [result] = await db
  .select({
    totalRevenue: sum(orders.total),
    orderCount: count(orders.id),
  })
  .from(orders)
  .where(eq(orders.status, 'completed'));

Raw SQL:

// Prisma
const result = await prisma.$queryRaw<{ id: number }[]>`
  SELECT id FROM orders WHERE created_at > NOW() - INTERVAL '7 days'
`;

// Drizzle
const result = await db.execute<{ id: number }>(
  sql`SELECT id FROM orders WHERE created_at > NOW() - INTERVAL '7 days'`
);

Transactions:

// Prisma
await prisma.$transaction(async (tx) => {
  await tx.order.create({ data: { ... } });
  await tx.inventory.update({ ... });
});

// Drizzle
await db.transaction(async (tx) => {
  await tx.insert(orders).values({ ... });
  await tx.update(inventory).set({ ... }).where(eq(inventory.productId, id));
});

When to Choose Drizzle vs Prisma

Situation | Recommendation
Rapid prototyping, simple CRUD-heavy API | Prisma — less boilerplate, faster to get running
Complex reports with multi-table joins and aggregates | Drizzle — SQL-like API is easier to reason about
Serverless / edge (Cloudflare Workers, Vercel Edge) | Drizzle — no Rust binary, works with HTTP database drivers
Team unfamiliar with SQL | Prisma — relation graph is more approachable
Performance-critical paths requiring precise SQL control | Drizzle — you see exactly what SQL runs
Existing complex database schema | Either, but Drizzle's introspect is more transparent
You want automatic migration generation from schema changes | Prisma — generates declarative SQL from schema diff
You want to own and review migration SQL files explicitly | Drizzle — generates SQL files you edit directly
Many-to-many with extra attributes on the join table | Drizzle — easier to model and query the join table directly
Subqueries and CTEs without raw SQL | Drizzle — built-in support; Prisma requires $queryRaw

Both ORMs can handle both categories — this is a spectrum, not a binary. Large projects often use Drizzle for reporting/analytics queries and Prisma for application CRUD, or vice versa. Many projects pick one and use raw SQL for the few cases the ORM cannot handle cleanly.


Key Differences

Concept | Drizzle | Prisma
Schema definition | TypeScript in .ts files | Separate .prisma file
Client generation | No generation step — types come from schema TS | prisma generate required after schema change
Migration generation | drizzle-kit generate → SQL file | prisma migrate dev → SQL file + applies it
Migration application | Separate drizzle-kit migrate step | Applied during prisma migrate dev automatically
Query style | SQL-like (operators, explicit joins) | Object-based (nested where, include)
Relation loading | Explicit joins or with API | include
Subqueries | Built-in | Requires $queryRaw
CTEs | Built-in ($with) | Requires $queryRaw
Runtime binary | None — pure TypeScript/JS | Rust binary (query engine)
Serverless compatibility | Excellent | Needs extra setup (Accelerate/PgBouncer)
Bundle size | Smaller | Larger (includes Rust binary)
TypeScript inference | Schema is the source of truth; types auto-inferred | Types generated by prisma generate
Upsert | .onConflictDoUpdate() | Built-in upsert()
Change tracking | None | None

Gotchas for .NET Engineers

1. db.select() returns an array, not a single object

Drizzle’s select() always returns an array, even when you expect one record. This mirrors SQL’s behavior (a SELECT returns a result set). It will not throw if no rows are found — it returns an empty array.

// Returns Order[] — possibly empty
const result = await db.select().from(orders).where(eq(orders.id, 999));

// Wrong — result is always an array
const order = await db.select().from(orders).where(eq(orders.id, 1));
order.id; // TypeScript error: 'id' does not exist on type 'Order[]'

// Correct — destructure or index
const [order] = await db.select().from(orders).where(eq(orders.id, 1));
if (!order) {
  throw new Error('Order not found');
}

// Or use findFirst from the relational API (returns T | undefined)
const order2 = await db.query.orders.findFirst({
  where: eq(orders.id, 1),
});

This is a consistent departure from EF Core’s FindAsync() / FirstOrDefaultAsync() pattern. You must handle the “not found” case explicitly.

2. Schema changes are not automatically detected — you must run drizzle-kit generate

Unlike Prisma, where editing schema.prisma and running prisma migrate dev creates and applies the migration in one step, Drizzle separates schema definition (TypeScript) from migration generation. Forgetting to run drizzle-kit generate after a schema change means your database and TypeScript types are out of sync, but TypeScript will not tell you — the types reflect your TypeScript code, which you just changed.

# Correct workflow after any schema.ts change:
# 1. Edit src/db/schema.ts
# 2. Generate migration SQL
npx drizzle-kit generate
# 3. Review the generated SQL in drizzle/ directory
# 4. Apply it
npx drizzle-kit migrate
# 5. No separate "generate client" step — types update immediately

3. Numeric/Decimal values come back as strings

PostgreSQL’s NUMERIC/DECIMAL type is returned as a string by the underlying pg and postgres.js drivers to preserve precision (JavaScript number cannot represent large decimals exactly). Drizzle passes this through — it does not wrap in a Decimal object like Prisma does.

// Schema definition
const orders = pgTable('orders', {
  total: numeric('total', { precision: 12, scale: 2 }).notNull(),
});

// Query result
const [order] = await db.select().from(orders).where(eq(orders.id, 1));
console.log(typeof order.total); // 'string'
console.log(order.total);        // '249.99'

// You must parse it explicitly
const total = parseFloat(order.total); // loses precision for very large values
const total2 = new Decimal(order.total); // use decimal.js for financial math

// Type annotation is string — TypeScript is correct
const total3: string = order.total; // no error

Add a .$type<Decimal>() modifier if you want TypeScript to know the runtime type is a Decimal (you still need to parse it yourself):

import Decimal from 'decimal.js';

const orders = pgTable('orders', {
  total: numeric('total', { precision: 12, scale: 2 }).$type<Decimal>().notNull(),
});

4. Relations in Drizzle require the schema parameter at init time

The relational query API (db.query.orders.findFirst({ with: ... })) only works if you pass your schema to drizzle(). Forgetting this gives a runtime error, not a compile-time error.

// Wrong — db.query will be empty/undefined
const db = drizzle(client);

// Correct — pass schema
import * as schema from './schema';
const db = drizzle(client, { schema });

// Now this works
await db.query.orders.findFirst({ with: { customer: true } });

5. onConflictDoNothing vs onConflictDoUpdate — know the difference

// Silently skip if the unique constraint fires (no error, no update)
await db
  .insert(customers)
  .values({ email: 'user@example.com', fullName: 'Test User' })
  .onConflictDoNothing();

// Upsert — update specific fields on conflict
await db
  .insert(customers)
  .values({ email: 'user@example.com', fullName: 'Updated Name' })
  .onConflictDoUpdate({
    target: customers.email,        // the conflicting unique column
    set: { fullName: 'Updated Name' }, // what to update
  });

// Common pattern: update everything except the key
await db
  .insert(customers)
  .values(customerData)
  .onConflictDoUpdate({
    target: customers.email,
    set: {
      fullName: sql`excluded.full_name`, // reference the attempted insert values
      isActive: sql`excluded.is_active`,
    },
  });

excluded is a PostgreSQL special reference to the row that was attempted in the INSERT. Use it to say “set the column to whatever we tried to insert.”


Hands-On Exercise

Build the same blog repository from article 5.2 using Drizzle, then compare the two implementations.

Schema:

// src/db/schema.ts
import { pgTable, serial, text, boolean, timestamp, integer, primaryKey } from 'drizzle-orm/pg-core';
import { relations } from 'drizzle-orm';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull().unique(),
  name: text('name').notNull(),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
});

export const posts = pgTable('posts', {
  id: serial('id').primaryKey(),
  title: text('title').notNull(),
  content: text('content').notNull(),
  published: boolean('published').notNull().default(false),
  authorId: integer('author_id').notNull().references(() => users.id),
  createdAt: timestamp('created_at', { withTimezone: true }).notNull().defaultNow(),
  updatedAt: timestamp('updated_at', { withTimezone: true }).notNull().defaultNow(),
});

export const tags = pgTable('tags', {
  id: serial('id').primaryKey(),
  name: text('name').notNull().unique(),
});

// Explicit many-to-many join table
export const postTags = pgTable('post_tags', {
  postId: integer('post_id').notNull().references(() => posts.id),
  tagId: integer('tag_id').notNull().references(() => tags.id),
}, (table) => ({
  pk: primaryKey({ columns: [table.postId, table.tagId] }),
}));

export const usersRelations = relations(users, ({ many }) => ({
  posts: many(posts),
}));

export const postsRelations = relations(posts, ({ one, many }) => ({
  author: one(users, { fields: [posts.authorId], references: [users.id] }),
  postTags: many(postTags),
}));

export const tagsRelations = relations(tags, ({ many }) => ({
  postTags: many(postTags),
}));

export const postTagsRelations = relations(postTags, ({ one }) => ({
  post: one(posts, { fields: [postTags.postId], references: [posts.id] }),
  tag: one(tags, { fields: [postTags.tagId], references: [tags.id] }),
}));

Repository:

// src/blog-repository.ts
import { db } from './db';
import { posts, users, tags, postTags } from './db/schema';
import { eq, and, gte, desc, count, sql, inArray } from 'drizzle-orm';

// 1. Get published posts with author, paginated, optional tag filter
export async function getPublishedPosts(options: {
  page: number;
  pageSize: number;
  tag?: string;
}) {
  const { page, pageSize, tag } = options;

  // If filtering by tag, get matching post IDs first
  let postIds: number[] | undefined;
  if (tag) {
    const tagRows = await db
      .select({ postId: postTags.postId })
      .from(postTags)
      .innerJoin(tags, eq(postTags.tagId, tags.id))
      .where(eq(tags.name, tag));
    postIds = tagRows.map((r) => r.postId);
    if (postIds.length === 0) return [];
  }

  return db.query.posts.findMany({
    where: and(
      eq(posts.published, true),
      postIds ? inArray(posts.id, postIds) : undefined
    ),
    with: {
      author: { columns: { id: true, name: true } },
      postTags: { with: { tag: true } },
    },
    orderBy: desc(posts.createdAt),
    offset: page * pageSize,
    limit: pageSize,
  });
}

// 2. Create a post with tags
export async function createPost(data: {
  title: string;
  content: string;
  authorId: number;
  tags: string[];
}) {
  return db.transaction(async (tx) => {
    const [post] = await tx
      .insert(posts)
      .values({
        title: data.title,
        content: data.content,
        authorId: data.authorId,
      })
      .returning();

    if (data.tags.length > 0) {
      // Insert tags (ignore conflict on name)
      await tx
        .insert(tags)
        .values(data.tags.map((name) => ({ name })))
        .onConflictDoNothing();

      // Fetch tag IDs
      const tagRows = await tx
        .select({ id: tags.id })
        .from(tags)
        .where(inArray(tags.name, data.tags));

      // Link tags to post
      await tx.insert(postTags).values(
        tagRows.map((t) => ({ postId: post.id, tagId: t.id }))
      );
    }

    return post;
  });
}

// 3. Get post counts by author
export async function getPostCountByAuthor() {
  return db
    .select({
      authorId: posts.authorId,
      authorName: users.name,
      postCount: count(posts.id),
    })
    .from(posts)
    .innerJoin(users, eq(posts.authorId, users.id))
    .groupBy(posts.authorId, users.name)
    .orderBy(desc(count(posts.id)));
}

Quick Reference

# Setup
npm install drizzle-orm postgres
npm install drizzle-kit --save-dev

# Migration workflow
npx drizzle-kit generate          # generate SQL migration from schema changes
npx drizzle-kit migrate           # apply pending migrations
npx drizzle-kit push              # push schema to DB (no migration file, for prototyping)
npx drizzle-kit studio            # browser UI
npx drizzle-kit introspect        # generate schema from existing DB
// Setup
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import * as schema from './schema';
const db = drizzle(postgres(process.env.DATABASE_URL!), { schema });

// Column types
serial('col')                           // SERIAL (auto-increment)
integer('col')                          // INTEGER
text('col')                             // TEXT
varchar('col', { length: n })           // VARCHAR(n)
boolean('col')                          // BOOLEAN
numeric('col', { precision: p, scale: s }) // NUMERIC
uuid('col').defaultRandom()             // UUID
timestamp('col', { withTimezone: true }).defaultNow() // TIMESTAMPTZ
jsonb('col')                            // JSONB

// Query builder
db.select().from(table)
db.select({ id: t.id, name: t.name }).from(table)
db.select().from(t).where(eq(t.id, 1))
db.select().from(t).where(and(eq(t.a, 1), gt(t.b, 100)))
db.select().from(t).where(or(eq(t.a, 1), eq(t.a, 2)))
db.select().from(t).innerJoin(t2, eq(t.fk, t2.id))
db.select().from(t).leftJoin(t2, eq(t.fk, t2.id))
db.select().from(t).orderBy(desc(t.createdAt))
db.select().from(t).limit(20).offset(40)
db.insert(t).values({ ... }).returning()
db.insert(t).values({ ... }).onConflictDoUpdate({ target: t.col, set: { ... } })
db.insert(t).values({ ... }).onConflictDoNothing()
db.update(t).set({ ... }).where(eq(t.id, 1)).returning()
db.delete(t).where(eq(t.id, 1)).returning()

// Operators (import from 'drizzle-orm')
eq(col, val)    ne(col, val)    gt(col, val)    gte(col, val)
lt(col, val)    lte(col, val)   like(col, patt) ilike(col, patt)
and(...conds)   or(...conds)    not(cond)
isNull(col)     isNotNull(col)  inArray(col, arr)  notInArray(col, arr)
count(col)      sum(col)        avg(col)        min(col)   max(col)

// Relational API (requires schema passed to drizzle())
db.query.table.findFirst({ where: ..., with: { relation: true } })
db.query.table.findMany({ where, with, orderBy, limit, offset })

// Transactions
await db.transaction(async (tx) => { await tx.insert(...).values(...) })

// Raw SQL
sql`SELECT * FROM ${table} WHERE id = ${id}`
await db.execute(sql`...`)

Further Reading

Database Migrations: EF Core vs Prisma and Drizzle

For .NET engineers who know: Add-Migration, Update-Database, migration bundles, IMigrationsSqlGenerator, and the discipline of never editing applied migrations
You'll learn: How Prisma and Drizzle handle the migration lifecycle — what they automate, what they leave to you, and the specific production deployment patterns that replace your EF Core workflows
Time: 15-20 min read


The .NET Way (What You Already Know)

EF Core migrations are code. When you run Add-Migration, EF Core compares the current model snapshot against your entity classes and generates a C# migration class with Up() and Down() methods. Both directions ultimately execute as SQL through EF Core's migrations pipeline (IMigrator / IMigrationsSqlGenerator). You can customize Up() and Down() with raw SQL, call stored procedures, or add seed data directly in the migration.

// EF Core migration — a class with explicit Up() and Down()
public partial class AddOrderStatusColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "status",
            table: "orders",
            type: "nvarchar(50)",
            nullable: false,
            defaultValue: "pending");

        // Custom SQL in migrations is first-class
        migrationBuilder.Sql(@"
            UPDATE orders
            SET status = CASE
                WHEN shipped_at IS NOT NULL THEN 'shipped'
                ELSE 'pending'
            END");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "status", table: "orders");
    }
}

The workflow is:

  1. dotnet ef migrations add <Name> — generates the C# migration
  2. Review and optionally edit the migration file
  3. dotnet ef database update — applies to the target database
  4. In production CI/CD: dotnet ef database update or a migration bundle

EF Core tracks applied migrations in __EFMigrationsHistory. It will never re-apply a migration that is already recorded there.


Prisma Migrations

How Prisma Differs: Declarative, SQL-Only

Prisma’s migration approach is declarative. You describe what the schema should look like in schema.prisma, and Prisma figures out what SQL is needed to get there. The generated migration files are plain SQL — not C#, not JavaScript. There are no Up() and Down() methods.

The key difference: Prisma does not generate rollback SQL.

prisma/
  migrations/
    20260101000000_init/
      migration.sql          ← forward SQL only
    20260115000000_add_orders/
      migration.sql
  schema.prisma

Each migration directory contains one file: the SQL to apply. That is it. If you need to roll back, you write the rollback SQL yourself (or restore a database backup, which is the more common production strategy).

Development Workflow

# Step 1: Edit schema.prisma
# Step 2: Create migration and apply to dev DB
npx prisma migrate dev --name add_status_to_orders

# Step 3: Prisma generates and immediately applies the migration
# It also re-runs prisma generate to update the TypeScript client

prisma migrate dev does four things in order:

  1. Computes the diff between your current schema.prisma and the last migration
  2. Writes a new migration.sql file under prisma/migrations/<timestamp>_<name>/
  3. Applies the migration to your local development database
  4. Runs prisma generate to regenerate the TypeScript client

This is convenient but means you cannot preview the SQL before it runs in development. Use prisma migrate dev --create-only if you want to generate the file without applying it:

# Generate the SQL file but do not apply it yet
npx prisma migrate dev --name add_status_to_orders --create-only

# Review the file at prisma/migrations/20260115000000_add_status_to_orders/migration.sql
# Edit it if needed, then apply
npx prisma migrate dev

The Generated SQL

-- prisma/migrations/20260115000000_add_status_to_orders/migration.sql
-- Prisma generates clean, standard SQL

ALTER TABLE "orders" ADD COLUMN "status" TEXT NOT NULL DEFAULT 'pending';

UPDATE "orders"
SET "status" = CASE
    WHEN "shipped_at" IS NOT NULL THEN 'shipped'
    ELSE 'pending'
END;

Wait — that UPDATE statement is not generated automatically by Prisma. Prisma generates only the DDL (the ALTER TABLE). If you need data migration logic (backfilling data based on existing columns), you must add the SQL manually to the generated migration file before applying it.

This is the fundamental limitation: Prisma migrations are SQL-only, and only DDL is auto-generated. Data migrations and complex transformations require manual SQL additions.

-- What Prisma generates automatically:
ALTER TABLE "orders" ADD COLUMN "status" TEXT NOT NULL DEFAULT 'pending';

-- What you add manually for the data migration:
UPDATE "orders"
SET "status" = CASE
    WHEN "shipped_at" IS NOT NULL THEN 'shipped'
    ELSE 'pending'
END;

Prisma tracks applied migrations in _prisma_migrations (equivalent to __EFMigrationsHistory). It will not re-apply already-applied migrations.
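If you want to see what Prisma recorded — the closest analogue to querying __EFMigrationsHistory — you can inspect the tracking table directly. A minimal sketch, assuming psql access; the column names reflect the current table layout and may change between Prisma versions:

-- List applied migrations and when they finished
SELECT migration_name, started_at, finished_at
FROM _prisma_migrations
ORDER BY started_at;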

Production Deployment

Never run prisma migrate dev in production. It is designed for development: it can reset the database and prompts interactively. For production (and CI/CD):

# Apply pending migrations without interactive prompts
npx prisma migrate deploy

migrate deploy only applies pending migrations. It never generates new ones, never resets the database, never prompts. It is the command for your deployment pipeline.

# GitHub Actions — production deployment step
- name: Apply database migrations
  run: npx prisma migrate deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}

- name: Deploy application
  run: npm run start

The sequence matters: run migrate deploy before starting the new application version. Doing it in the other order means your new code may run against the old schema for a window of time.

Seeding

Prisma’s seed script is separate from migrations. It is a TypeScript file you write and register in package.json:

// prisma/seed.ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // Use upsert — makes the seed idempotent (safe to run multiple times)
  const roles = await Promise.all([
    prisma.role.upsert({
      where: { name: 'admin' },
      update: {},
      create: { name: 'admin', permissions: ['read', 'write', 'delete'] },
    }),
    prisma.role.upsert({
      where: { name: 'viewer' },
      update: {},
      create: { name: 'viewer', permissions: ['read'] },
    }),
  ]);

  console.log(`Seeded ${roles.length} roles`);
}

main()
  .catch((e) => { console.error(e); process.exit(1); })
  .finally(() => prisma.$disconnect());
// package.json
{
  "prisma": {
    "seed": "ts-node prisma/seed.ts"
  }
}
# Run seed manually
npx prisma db seed

# Seed also runs automatically after:
npx prisma migrate reset   # drops DB, re-applies all migrations, seeds

Seed data should be idempotent (using upsert, createMany with skipDuplicates, or connectOrCreate). This lets you run the seed safely in multiple environments without errors.
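For reference, here is what the createMany variant mentioned above looks like — a minimal sketch reusing the role data from the seed script, intended to run inside main():

// Alternative to upsert: bulk insert that skips rows violating unique constraints
await prisma.role.createMany({
  data: [
    { name: 'admin', permissions: ['read', 'write', 'delete'] },
    { name: 'viewer', permissions: ['read'] },
  ],
  skipDuplicates: true, // PostgreSQL: emits ON CONFLICT DO NOTHING
});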

Squashing Migrations

After many iterations, you may have dozens of small migrations. You can squash them into one:

# Mark all migrations as applied without running them (e.g., for a fresh baseline)
npx prisma migrate resolve --applied 20260101000000_init

The proper squash workflow:

  1. Create a new migration with the desired combined SQL (by hand or by running db push on a fresh DB then diffing)
  2. Delete the old migration directories
  3. Mark the new baseline migration as applied on all existing databases with migrate resolve --applied
  4. New databases created from scratch will apply the squashed migration

This is more manual than EF Core’s squash approach, but the result is the same: fewer migration files to apply for a fresh database.
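One way to produce the combined SQL in step 1 is prisma migrate diff, which can emit the full schema as a single script. A sketch, with a placeholder timestamp and migration name:

# Generate one baseline migration from the current schema.prisma
mkdir -p prisma/migrations/20260201000000_baseline
npx prisma migrate diff \
  --from-empty \
  --to-schema-datamodel prisma/schema.prisma \
  --script > prisma/migrations/20260201000000_baseline/migration.sql

# After deleting the old migration directories, mark the baseline as applied
# on every existing database
npx prisma migrate resolve --applied 20260201000000_baseline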

Handling Merge Conflicts in Prisma Migrations

When two branches both add migrations, you will have a conflict in the prisma/migrations directory. Prisma uses migration_lock.toml as a conflict detector:

# prisma/migrations/migration_lock.toml — auto-generated, check into git
# This file is used by Prisma to detect merge conflicts
provider = "postgresql"

When you merge branches with conflicting migrations, migration_lock.toml may conflict. After resolving the merge:

# Check current state
npx prisma migrate status

# If migrations are out of order or Prisma is confused, reset dev DB
npx prisma migrate reset

# Re-apply all migrations from scratch (dev only — destroys data)
# Prisma will detect the correct order from directory timestamps

For production, never reset. Order matters: both migrations apply in timestamp order. If branch A adds a users table and branch B adds a posts table with a users FK, and B’s timestamp is earlier, deploying in order will fail. Structure your team’s migration workflow to avoid this:

  • Long-lived feature branches should rebase before merging to pull in any new migrations
  • Use a migration gating step in CI that verifies prisma migrate status is clean before merging

Drizzle Migrations

How Drizzle Differs: Explicit SQL Files

Drizzle takes a different philosophy. drizzle-kit generate inspects your TypeScript schema definitions, diffs them against a snapshot of the previous state (stored in a meta/ directory alongside your migrations), and generates a SQL file. It does not apply the migration. Application is a separate explicit step.

drizzle/
  0000_initial.sql
  0001_add_orders.sql
  0002_add_status_column.sql
  meta/
    _journal.json        ← tracks which migrations exist and their order
    0000_snapshot.json   ← schema state after each migration
    0001_snapshot.json
    0002_snapshot.json

Development Workflow

# Step 1: Edit src/db/schema.ts

# Step 2: Generate migration SQL (does NOT apply it)
npx drizzle-kit generate

# Generated: drizzle/0003_add_status_column.sql
# Review the file — edit if needed

# Step 3: Apply the migration
npx drizzle-kit migrate

The generated SQL:

-- drizzle/0003_add_status_column.sql
ALTER TABLE "orders" ADD COLUMN "status" text DEFAULT 'pending' NOT NULL;

As with Prisma, data migration SQL must be added manually. Drizzle generates DDL only.
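Here is what the same file might look like after a manual edit, mirroring the Prisma backfill example above (the CASE logic assumes the shipped_at column from earlier):

-- drizzle/0003_add_status_column.sql — after adding the backfill by hand
ALTER TABLE "orders" ADD COLUMN "status" text DEFAULT 'pending' NOT NULL;

-- Added manually: backfill existing rows based on shipping state
UPDATE "orders"
SET "status" = CASE
    WHEN "shipped_at" IS NOT NULL THEN 'shipped'
    ELSE 'pending'
END;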

You can also apply migrations programmatically from application code — useful for integration tests or docker-compose startup:

// src/db/migrate.ts
import { drizzle } from 'drizzle-orm/postgres-js';
import { migrate } from 'drizzle-orm/postgres-js/migrator';
import postgres from 'postgres';

async function runMigrations() {
  const migrationClient = postgres(process.env.DATABASE_URL!, { max: 1 });
  const db = drizzle(migrationClient);

  await migrate(db, { migrationsFolder: './drizzle' });
  await migrationClient.end();
}

runMigrations().catch(console.error);

This is a common pattern for serverless deployments or Docker-based test environments where you want migrations to run automatically on startup.

Production Deployment with Drizzle

# Option 1: CLI
npx drizzle-kit migrate

# Option 2: Programmatic (run as a startup script)
node dist/db/migrate.js
# GitHub Actions
- name: Run database migrations
  run: npx drizzle-kit migrate
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}

Or, more commonly with Drizzle, run migrations as part of application startup using the programmatic API — the application migrates itself on boot, then starts serving requests. This is the Kubernetes-friendly approach.
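A minimal sketch of that wiring, assuming the compiled migrate script shown above and an application entry point at dist/main.js (the entry point path is illustrative):

// package.json — migrate on boot, then start the server
{
  "scripts": {
    "start": "node dist/db/migrate.js && node dist/main.js"
  }
}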

Drizzle Seeding

Drizzle has no built-in seed command. You write a seed script and run it however you like:

// src/db/seed.ts
import { db } from './index';
import { roles, users } from './schema';

async function seed() {
  await db
    .insert(roles)
    .values([
      { name: 'admin' },
      { name: 'viewer' },
    ])
    .onConflictDoNothing(); // idempotent

  console.log('Seeded roles');
}

seed()
  .catch(console.error)
  .finally(() => process.exit());
// package.json
{
  "scripts": {
    "db:seed": "ts-node src/db/seed.ts",
    "db:migrate": "drizzle-kit migrate",
    "db:generate": "drizzle-kit generate"
  }
}

Handling Merge Conflicts in Drizzle

Drizzle uses the meta/_journal.json file and numbered SQL files. If two branches both run drizzle-kit generate, you get a conflict in _journal.json and possibly conflicting migration numbers.

The correct resolution:

  1. Decide which migration should be applied first
  2. Renumber if needed (Drizzle uses the journal for order, not file names)
  3. Update _journal.json to reflect the correct order
  4. Verify by running drizzle-kit migrate --dry-run if supported, or inspect the journal

In practice, teams enforce “squash before merge” or “rebase before generating” to avoid this. One person generates migrations per feature branch; they do not generate independently on multiple branches touching the same tables.

Squashing Migrations in Drizzle

# Push the current schema state directly (no migration file, for dev baseline)
npx drizzle-kit push

# For squashing in production:
# 1. Combine the SQL from your old migration files into one file manually
# 2. Update meta/_journal.json to reference only the new file
# 3. Mark it as applied on existing databases

Drizzle does not have a built-in squash command. The approach is manual: consolidate the SQL files and update the journal. This is more work than Prisma’s migrate resolve, but gives you full control over the resulting SQL.


Side-by-Side Migration Workflow Comparison

| Step | EF Core | Prisma | Drizzle |
|---|---|---|---|
| Schema source | C# entity classes | schema.prisma | TypeScript schema files |
| Create migration | dotnet ef migrations add <Name> | prisma migrate dev --name <name> | drizzle-kit generate |
| Apply to dev | dotnet ef database update | (automatic with migrate dev) | drizzle-kit migrate |
| Apply to production | dotnet ef database update or bundle | prisma migrate deploy | drizzle-kit migrate or programmatic |
| Preview SQL | View generated .cs + migration SQL | --create-only flag, then review | Review generated .sql file |
| Auto-generate rollback | Yes — Down() method | No | No |
| Data migration support | Yes — custom C# in Up()/Down() | Manual SQL added to migration file | Manual SQL added to migration file |
| Seed command | HasData() in OnModelCreating (tied to migrations) or external script | prisma db seed | External script (ts-node seed.ts) |
| Reset dev DB | dotnet ef database drop && dotnet ef database update | prisma migrate reset | Drop manually + drizzle-kit migrate |
| Migration history table | __EFMigrationsHistory | _prisma_migrations | __drizzle_migrations |
| Squash/baseline | Custom — edit snapshot + MigrationAttribute | prisma migrate resolve --applied | Manual journal edit |
| Introspect existing DB | Scaffold-DbContext | prisma db pull | drizzle-kit introspect |

Rollback Strategies

This is the biggest conceptual gap between EF Core and both TypeScript ORMs. EF Core generates Down() methods. Neither Prisma nor Drizzle generates rollback SQL automatically.

Your rollback options in TypeScript ORM land:

Option 1: Database restore (safest for production)

Keep automated backups (Render, AWS RDS, and Supabase all do this by default). If a migration causes a production incident, restore to the pre-migration snapshot. This is simpler than you might expect for many failure modes.

Option 2: Write rollback SQL manually

Write a new forward migration that undoes the previous one. This is the recommended approach for non-destructive changes:

-- Migration 0005: added a column that turned out to be wrong
-- drizzle/0005_add_bad_column.sql
ALTER TABLE "orders" ADD COLUMN "bad_column" text;

-- Rollback by writing migration 0006
-- drizzle/0006_remove_bad_column.sql
ALTER TABLE "orders" DROP COLUMN "bad_column";

Option 3: Transactional DDL (PostgreSQL-specific)

PostgreSQL supports DDL inside transactions. Drizzle and Prisma both run migrations in transactions by default. If your migration fails partway through, PostgreSQL rolls back the entire migration — you never end up with a partially applied state.

-- This runs inside a transaction; if the UPDATE fails, the ALTER TABLE rolls back too
ALTER TABLE "orders" ADD COLUMN "status" TEXT NOT NULL DEFAULT 'pending';

UPDATE "orders"
SET "status" = CASE
    WHEN "shipped_at" IS NOT NULL THEN 'shipped'
    ELSE 'pending'
END;

This is a significant advantage over SQL Server for migration safety. The exception: operations like CREATE INDEX CONCURRENTLY cannot run inside a transaction and must be handled separately.

Option 4: Blue-green and expand-contract pattern

For zero-downtime deployments, use the expand-contract pattern:

Phase 1 (expand):   Add new column as nullable — deploy this migration, keep old code running
Phase 2 (migrate):  Backfill data in the new column — deploy code that writes to both columns
Phase 3 (contract): Make the column NOT NULL, drop the old column — deploy once backfill is complete

This avoids locking the table and allows rollback at each phase.
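As a sketch, the phases map to separate migration files roughly like this (table and column names are illustrative):

-- Phase 1 (expand): add the new column as nullable; old code keeps working
ALTER TABLE "orders" ADD COLUMN "shipping_method" text;

-- Phase 2 (migrate): backfill while code writes to both old and new columns
UPDATE "orders" SET "shipping_method" = 'standard' WHERE "shipping_method" IS NULL;

-- Phase 3 (contract): enforce the constraint and drop the legacy column
ALTER TABLE "orders" ALTER COLUMN "shipping_method" SET NOT NULL;
ALTER TABLE "orders" DROP COLUMN "legacy_shipping_flag";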


CI/CD Integration

GitHub Actions: Prisma

# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  migrate-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Generate Prisma Client
        run: npx prisma generate

      - name: Run migrations
        run: npx prisma migrate deploy
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

      - name: Deploy application
        run: |
          # your deployment command here
          npm run build && npm run start
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

GitHub Actions: Drizzle

name: Deploy

on:
  push:
    branches: [main]

jobs:
  migrate-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run migrations
        run: npx drizzle-kit migrate
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

      - name: Deploy application
        run: npm run build && npm run start
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}

Verifying Migration State in CI

Add a migration status check to your PR validation pipeline to catch drift early:

# PR validation job — does not deploy, just checks
validate-migrations:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci

    # Prisma: check that no migrations are pending
    - name: Check migration status
      run: |
        STATUS=$(npx prisma migrate status --json 2>/dev/null || echo '{"hasPendingMigrations": true}')
        echo $STATUS
      env:
        DATABASE_URL: ${{ secrets.STAGING_DATABASE_URL }}

    # Drizzle: alternative — check that drizzle-kit generate produces no new files
    # (requires running against a staging DB that is up to date)

Gotchas for .NET Engineers

1. There is no auto-generated rollback — you must plan for it

EF Core’s Down() method creates an expectation that rollback is automatic and complete. In Prisma and Drizzle, you plan for rollback in advance:

  • Destructive changes (dropping columns, dropping tables) must be preceded by verifying the code no longer references them
  • Keep Down() SQL in a comment or companion file for reference
  • For production rollback, lean on database backups — they are reliable and fast
-- Drizzle migration — include rollback SQL as a comment for reference
-- drizzle/0010_drop_legacy_column.sql

-- ROLLBACK (if needed): ALTER TABLE "orders" ADD COLUMN "legacy_field" text;

ALTER TABLE "orders" DROP COLUMN "legacy_field";

This is a convention you enforce, not a platform feature.

2. Never edit an applied migration file

This is the same rule as EF Core, but worth restating because the files are more tempting to edit (they are just SQL). If you edit a migration file that has already been applied to any database (dev, staging, production), you will cause a checksum mismatch.

Prisma computes a checksum of each migration file and stores it in _prisma_migrations. If the file changes after it has been applied, migrate dev (via the shadow database) and migrate deploy will both refuse to proceed with an error along these lines:

Error: Migration `20260115000000_add_status_to_orders` failed to apply cleanly to the shadow database.
Checksum verification failed.

If you need to change a migration that has only ever been applied to your local development database, delete the migration directory and regenerate it. If it has been applied anywhere else (even staging), write a new forward migration instead.

3. migrate dev vs migrate deploy — use the wrong one in production and you risk data loss

prisma migrate dev can trigger a database reset if it detects schema drift. Running it in production means it might drop your production database. This is not hypothetical — it has happened.

# Development only
npx prisma migrate dev         # interactive, may reset DB, regenerates client

# Production / CI only
npx prisma migrate deploy      # applies pending migrations, no prompts, no reset

Add a guard in your deployment scripts to prevent this:

#!/bin/bash
if [ "$NODE_ENV" = "production" ]; then
  npx prisma migrate deploy
else
  npx prisma migrate dev
fi

4. drizzle-kit push is not a migration tool

drizzle-kit push pushes your current schema directly to the database without creating migration files. It is intended for rapid prototyping on a throwaway database, similar to EF Core’s EnsureCreated().

Using push in any non-throwaway environment is dangerous: it may make destructive changes without a migration history, and you lose the ability to track what changed or reproduce the schema on a fresh database.

# Safe — prototyping only
npx drizzle-kit push

# Not safe — any environment where data matters
# Use generate + migrate instead
npx drizzle-kit generate && npx drizzle-kit migrate

5. Custom migration steps (data transforms, stored procedures) require manual SQL in Prisma/Drizzle

EF Core lets you write C# inside Up() and Down(). You can call services, use EF Core queries, or call stored procedures in a migration:

// EF Core — custom logic in migration
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.AddColumn<string>("full_name", "users", "nvarchar(400)", nullable: true);

    // Call a custom stored procedure to backfill
    migrationBuilder.Sql("EXEC sp_BackfillFullName");

    // Or raw EF Core operations (via injected DbContext — requires custom IMigrationsSqlGenerator)
}

In Prisma and Drizzle, you add SQL directly to the migration file. If the logic is too complex for a SQL UPDATE statement, extract it to a PostgreSQL function and call it:

-- Prisma migration file — add your backfill SQL after the DDL
ALTER TABLE "users" ADD COLUMN "full_name" text;

-- Create a temporary function for the backfill
CREATE OR REPLACE FUNCTION backfill_full_name() RETURNS void AS $$
BEGIN
  UPDATE users
  SET full_name = first_name || ' ' || last_name
  WHERE full_name IS NULL;
END;
$$ LANGUAGE plpgsql;

SELECT backfill_full_name();

DROP FUNCTION backfill_full_name();

-- Now enforce NOT NULL after backfill
ALTER TABLE "users" ALTER COLUMN "full_name" SET NOT NULL;

This pattern — add nullable, backfill, then make NOT NULL — is the standard approach for adding required columns to populated tables without downtime.

6. The migration history table is in your database — production deployments need write access

prisma migrate deploy needs to read and write to _prisma_migrations. If your production database user has read-only permissions, migrations will fail. Create a dedicated migration user with the minimum permissions needed:

-- PostgreSQL: create a migration user
CREATE USER app_migrator WITH PASSWORD 'migration-password';
GRANT CONNECT ON DATABASE myapp TO app_migrator;
GRANT USAGE ON SCHEMA public TO app_migrator;
GRANT CREATE ON SCHEMA public TO app_migrator;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO app_migrator;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO app_migrator;

-- Application runtime user (read/write, no DDL)
CREATE USER app_runtime WITH PASSWORD 'runtime-password';
GRANT CONNECT ON DATABASE myapp TO app_runtime;
GRANT USAGE ON SCHEMA public TO app_runtime;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_runtime;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO app_runtime;

Use the migrator user’s connection string only for the migration step in CI/CD, and the runtime user’s connection string in the application itself. Keep the two DATABASE_URL values as separate secrets.
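A sketch of how that looks in GitHub Actions — the secret name MIGRATOR_DATABASE_URL is an example, not something the tools require:

- name: Run migrations (DDL-capable user)
  run: npx prisma migrate deploy
  env:
    DATABASE_URL: ${{ secrets.MIGRATOR_DATABASE_URL }}

- name: Deploy application (runtime user)
  run: npm run build && npm run start
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}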


Hands-On Exercise

This exercise simulates a schema evolution across three migrations to practice the full Prisma workflow, including a data migration.

Starting schema (already exists in production):

model User {
  id        Int      @id @default(autoincrement())
  firstName String   @map("first_name")
  lastName  String   @map("last_name")
  email     String   @unique
  createdAt DateTime @default(now()) @map("created_at")

  @@map("users")
}

Migration 1: Add a fullName column and backfill it

Edit schema.prisma to add fullName:

model User {
  id        Int      @id @default(autoincrement())
  firstName String   @map("first_name")
  lastName  String   @map("last_name")
  fullName  String?  @map("full_name")   // nullable initially
  email     String   @unique
  createdAt DateTime @default(now()) @map("created_at")

  @@map("users")
}
npx prisma migrate dev --name add_full_name_column --create-only

The generated SQL will be:

ALTER TABLE "users" ADD COLUMN "full_name" TEXT;

Open the migration file and add the backfill:

ALTER TABLE "users" ADD COLUMN "full_name" TEXT;

-- Backfill existing rows
UPDATE "users" SET "full_name" = "first_name" || ' ' || "last_name";

Apply it:

npx prisma migrate dev

Migration 2: Make fullName required

Edit schema.prisma:

fullName  String   @map("full_name")   // remove the ?
npx prisma migrate dev --name make_full_name_required

Prisma generates:

ALTER TABLE "users" ALTER COLUMN "full_name" SET NOT NULL;

Migration 3: Add a role system

Add to schema.prisma:

model Role {
  id    Int    @id @default(autoincrement())
  name  String @unique
  users User[]

  @@map("roles")
}

model User {
  // ... existing fields
  roleId Int?   @map("role_id")
  role   Role?  @relation(fields: [roleId], references: [id])
}
npx prisma migrate dev --name add_roles --create-only

Edit the generated SQL to seed the default role and assign it to existing users:

-- Create the roles table
CREATE TABLE "roles" (
    "id" SERIAL NOT NULL,
    "name" TEXT NOT NULL,
    CONSTRAINT "roles_pkey" PRIMARY KEY ("id")
);
CREATE UNIQUE INDEX "roles_name_key" ON "roles"("name");

-- Add foreign key column to users
ALTER TABLE "users" ADD COLUMN "role_id" INTEGER;

-- Seed default roles
INSERT INTO "roles" ("name") VALUES ('user'), ('admin'), ('viewer');

-- Assign all existing users to the 'user' role
UPDATE "users" SET "role_id" = (SELECT "id" FROM "roles" WHERE "name" = 'user');

-- Add foreign key constraint
ALTER TABLE "users" ADD CONSTRAINT "users_role_id_fkey"
    FOREIGN KEY ("role_id") REFERENCES "roles"("id") ON DELETE SET NULL ON UPDATE CASCADE;
npx prisma migrate dev

Verify the state:

npx prisma migrate status

Quick Reference

# Prisma migration commands
npx prisma migrate dev --name <name>       # create + apply (development)
npx prisma migrate dev --create-only       # create SQL file, do not apply
npx prisma migrate dev                     # apply existing unapplied migrations
npx prisma migrate deploy                  # apply pending (production/CI)
npx prisma migrate status                  # show applied/pending migrations
npx prisma migrate reset                   # drop DB, re-apply all, seed (dev only)
npx prisma migrate resolve --applied <id>  # mark migration as applied (baseline)
npx prisma migrate resolve --rolled-back <id> # mark migration as rolled back
npx prisma db pull                         # introspect DB -> schema.prisma
npx prisma db push                         # push schema to DB without migrations (prototyping)
npx prisma db seed                         # run seed script
npx prisma generate                        # regenerate TypeScript client

# Drizzle migration commands
npx drizzle-kit generate                   # generate SQL migration from schema changes
npx drizzle-kit migrate                    # apply pending migrations
npx drizzle-kit push                       # push schema directly (prototyping only)
npx drizzle-kit introspect                 # generate schema.ts from existing DB
npx drizzle-kit studio                     # open data browser
npx drizzle-kit check                      # check for schema/migration inconsistencies
// Drizzle: programmatic migration (for startup scripts, tests)
import { migrate } from 'drizzle-orm/postgres-js/migrator';
await migrate(db, { migrationsFolder: './drizzle' });

// Prisma: check migration status programmatically
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
// Use $executeRaw to check _prisma_migrations if needed
Migration file locations:
  Prisma:  prisma/migrations/<timestamp>_<name>/migration.sql
  Drizzle: drizzle/<number>_<name>.sql
           drizzle/meta/_journal.json
           drizzle/meta/<number>_snapshot.json

Migration tracking tables:
  Prisma:  _prisma_migrations
  Drizzle: __drizzle_migrations
  EF Core: __EFMigrationsHistory

Production deployment checklist:

  • Run prisma migrate deploy (or drizzle-kit migrate) before starting new app version
  • Use the migrator database user (DDL permissions), not the runtime user
  • Keep database backups before any destructive migration
  • Test migration against a staging database first
  • Verify migration status after deployment
  • Never use migrate dev or db push in production
  • For large tables: test migration timing locally, consider CREATE INDEX CONCURRENTLY
  • For non-nullable column additions: backfill data in migration SQL before setting NOT NULL

Further Reading

Query Patterns: LINQ vs. Prisma/Drizzle Queries

For .NET engineers who know: LINQ, EF Core query composition, Include/ThenInclude, and the difference between IQueryable and IEnumerable.
You’ll learn: How every common LINQ pattern maps to Prisma and Drizzle, including pagination, aggregations, raw SQL, and N+1 prevention.
Time: 15-20 min


The .NET Way (What You Already Know)

LINQ is the backbone of EF Core data access. You compose queries against IQueryable<T>, and EF Core translates the expression tree into SQL at execution time. The query does not run until you materialize it — with ToListAsync(), FirstOrDefaultAsync(), CountAsync(), etc.

// Simple filter + projection
var users = await _db.Users
    .Where(u => u.IsActive && u.Age >= 18)
    .Select(u => new UserDto { Id = u.Id, Name = u.Name, Email = u.Email })
    .OrderBy(u => u.Name)
    .ToListAsync();

// Navigation properties with Include
var orders = await _db.Orders
    .Where(o => o.Status == OrderStatus.Pending)
    .Include(o => o.Customer)
    .Include(o => o.LineItems)
        .ThenInclude(li => li.Product)
    .ToListAsync();

// Aggregations
var stats = await _db.Orders
    .Where(o => o.CreatedAt >= DateTime.UtcNow.AddDays(-30))
    .GroupBy(o => o.CustomerId)
    .Select(g => new {
        CustomerId = g.Key,
        OrderCount = g.Count(),
        TotalSpend = g.Sum(o => o.Total)
    })
    .ToListAsync();

// Existence check
bool hasOverdueInvoices = await _db.Invoices
    .AnyAsync(i => i.DueDate < DateTime.UtcNow && i.Status != InvoiceStatus.Paid);

// Offset pagination
var page = await _db.Products
    .OrderBy(p => p.Name)
    .Skip((pageNumber - 1) * pageSize)
    .Take(pageSize)
    .ToListAsync();

Three things are built into this model that will not be present by default in the TS ORMs:

  1. Change tracking — EF Core tracks every entity you load; mutations are detected at SaveChanges() time.
  2. Lazy loading — navigation properties can be loaded automatically on first access (when configured).
  3. Expression tree translation — Where(u => u.Name.Contains(query)) becomes SQL LIKE; it never executes in memory.

The TypeScript Way

Prisma

Prisma queries are plain method calls on the Prisma Client, typed against the schema you define in schema.prisma. The API is not a query builder in the SQL sense — it is a strongly typed object API in which filters, projections, and relations are expressed as nested objects. You express what you want, not how to get it.

Where (filter)

// Prisma
const users = await prisma.user.findMany({
  where: {
    isActive: true,
    age: { gte: 18 },
  },
  select: {
    id: true,
    name: true,
    email: true,
  },
  orderBy: { name: 'asc' },
});
// EF Core equivalent
var users = await _db.Users
    .Where(u => u.IsActive && u.Age >= 18)
    .Select(u => new { u.Id, u.Name, u.Email })
    .OrderBy(u => u.Name)
    .ToListAsync();

Include (eager loading relations)

// Prisma — include replaces Include/ThenInclude
const orders = await prisma.order.findMany({
  where: { status: 'PENDING' },
  include: {
    customer: true,
    lineItems: {
      include: {
        product: true,
      },
    },
  },
});
// EF Core
var orders = await _db.Orders
    .Where(o => o.Status == OrderStatus.Pending)
    .Include(o => o.Customer)
    .Include(o => o.LineItems).ThenInclude(li => li.Product)
    .ToListAsync();

Select vs. include — choose one per query

In Prisma, select and include are mutually exclusive at the top level of a query. select gives you explicit field picking (like a projection); include adds related records on top of all scalar fields. If you want related records and also limit which scalar fields come back, nest select inside include.

// Nested select inside include
const orders = await prisma.order.findMany({
  include: {
    customer: {
      select: { id: true, name: true }, // only these fields from customer
    },
  },
});

Aggregations

// Count
const count = await prisma.user.count({ where: { isActive: true } });

// Aggregate (sum, avg, min, max)
const result = await prisma.order.aggregate({
  _sum: { total: true },
  _count: { id: true },
  where: {
    createdAt: { gte: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) },
  },
});
// result._sum.total, result._count.id

// GroupBy — closest to LINQ GroupBy + Select
const stats = await prisma.order.groupBy({
  by: ['customerId'],
  _count: { id: true },
  _sum: { total: true },
  where: {
    createdAt: { gte: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) },
  },
});
// EF Core GroupBy equivalent
var stats = await _db.Orders
    .Where(o => o.CreatedAt >= DateTime.UtcNow.AddDays(-30))
    .GroupBy(o => o.CustomerId)
    .Select(g => new { CustomerId = g.Key, Count = g.Count(), Total = g.Sum(o => o.Total) })
    .ToListAsync();

Any / All / Exists

// Any — does at least one match exist?
const hasOverdue = await prisma.invoice.count({
  where: {
    dueDate: { lt: new Date() },
    status: { not: 'PAID' },
  },
}) > 0;

// More efficiently with findFirst (stops at the first match)
const firstOverdue = await prisma.invoice.findFirst({
  where: { dueDate: { lt: new Date() }, status: { not: 'PAID' } },
  select: { id: true }, // minimise data transfer
});
const hasOverdue2 = firstOverdue !== null;

Prisma has no dedicated .exists() or .any() method. findFirst with a minimal select is the idiomatic pattern.

Skip / Take — offset pagination

// Prisma offset pagination
const page = await prisma.product.findMany({
  orderBy: { name: 'asc' },
  skip: (pageNumber - 1) * pageSize,
  take: pageSize,
});

Cursor pagination (preferred for large datasets)

Offset pagination degrades as offsets grow — the database must scan and skip rows. Cursor pagination is stable and scales.

// First page — no cursor
const firstPage = await prisma.product.findMany({
  take: 20,
  orderBy: { id: 'asc' },
});

// Subsequent pages — pass the last ID as cursor
const nextPage = await prisma.product.findMany({
  take: 20,
  cursor: { id: lastId },
  skip: 1, // skip the cursor row itself
  orderBy: { id: 'asc' },
});

Raw SQL

// Tagged template literal — safe parameterization
const users = await prisma.$queryRaw<User[]>`
  SELECT * FROM "User"
  WHERE email ILIKE ${'%' + domain}
  LIMIT 100
`;

// Unsafe raw (no parameterization — use only for dynamic identifiers you control)
const tableName = 'User';
const result = await prisma.$queryRawUnsafe(
  `SELECT COUNT(*) FROM "${tableName}"`
);

// Execute (no return value — UPDATE, DELETE, DDL)
const affected = await prisma.$executeRaw`
  UPDATE "User" SET "lastLoginAt" = NOW()
  WHERE id = ${userId}
`;

Drizzle

Drizzle takes a different philosophy: it is a query builder that looks like SQL and generates SQL you can inspect. The API is composable and closer to what you would write by hand.

import { db } from './db';
import { users, orders, lineItems, products } from './schema';
import { eq, gte, and, like, sql, count, sum, desc } from 'drizzle-orm';

// Where + select + orderBy
const result = await db
  .select({ id: users.id, name: users.name, email: users.email })
  .from(users)
  .where(and(eq(users.isActive, true), gte(users.age, 18)))
  .orderBy(users.name);

// Join (Drizzle uses explicit joins, not include)
const ordersWithCustomers = await db
  .select({
    orderId: orders.id,
    status: orders.status,
    customerName: users.name,
  })
  .from(orders)
  .innerJoin(users, eq(orders.customerId, users.id))
  .where(eq(orders.status, 'PENDING'));

// Aggregations
const stats = await db
  .select({
    customerId: orders.customerId,
    orderCount: count(orders.id),
    totalSpend: sum(orders.total),
  })
  .from(orders)
  .groupBy(orders.customerId);

// Skip / Take
const page = await db
  .select()
  .from(products)
  .orderBy(products.name)
  .offset((pageNumber - 1) * pageSize)
  .limit(pageSize);

// Raw SQL with Drizzle's sql`` tag
const raw = await db.execute(
  sql`SELECT * FROM users WHERE email ILIKE ${'%' + domain}`
);

Key Differences

| LINQ / EF Core | Prisma | Drizzle |
|---|---|---|
| Where(predicate) | where: { field: value } | .where(eq(table.field, value)) |
| Select(projection) | select: { field: true } | .select({ alias: table.field }) |
| Include(nav) | include: { relation: true } | Explicit .innerJoin() / .leftJoin() |
| ThenInclude(nav) | Nested include | Additional .join() calls |
| OrderBy(expr) | orderBy: { field: 'asc' } | .orderBy(asc(table.field)) |
| GroupBy(key) | groupBy: ['field'] | .groupBy(table.field) |
| AnyAsync(pred) | findFirst + null check | .limit(1) + null check |
| CountAsync() | count() | count() aggregate |
| Skip(n).Take(m) | skip: n, take: m | .offset(n).limit(m) |
| Cursor pagination | cursor: { id: x }, skip: 1 | Manual where id > cursor |
| FromSqlRaw(sql) | $queryRaw`...` | db.execute(sql`...`) |
| Change tracking | None | None |
| Lazy loading | None (must be explicit) | None |

Gotchas for .NET Engineers

1. No change tracking — mutations are always explicit

In EF Core, loading an entity and modifying a property is enough. SaveChanges() detects the diff and issues an UPDATE. Neither Prisma nor Drizzle tracks anything. Every update requires an explicit update call with the changed fields.

// WRONG mental model (will not work)
const user = await prisma.user.findUnique({ where: { id: userId } });
user.name = 'New Name'; // modifying the JS object does nothing
await prisma.save(); // this method does not exist

// CORRECT — always issue an explicit update
await prisma.user.update({
  where: { id: userId },
  data: { name: 'New Name' },
});

2. Prisma select and include cannot both appear at the query root

Depending on your Prisma version this surfaces as a TypeScript error, a runtime validation error, or both — either way the query is invalid:

// WRONG — mutually exclusive at the top level
const order = await prisma.order.findUnique({
  where: { id: orderId },
  select: { id: true, total: true },
  include: { customer: true }, // runtime error
});

// CORRECT — nest select inside include, or use include only
const order = await prisma.order.findUnique({
  where: { id: orderId },
  include: {
    customer: { select: { id: true, name: true } },
  },
});

3. N+1 is not prevented automatically

EF Core with Include() issues a single JOIN query (or a split query with AsSplitQuery()). Prisma issues one query per include level — which is efficient for small sets but becomes N+1 if you fetch a list and then access relations inside a loop:

// N+1 — DO NOT do this
const orders = await prisma.order.findMany({ where: { status: 'PENDING' } });
for (const order of orders) {
  // This issues a new DB query per iteration — N+1
  const customer = await prisma.user.findUnique({ where: { id: order.customerId } });
  console.log(customer.name);
}

// CORRECT — fetch with include in one query
const orders = await prisma.order.findMany({
  where: { status: 'PENDING' },
  include: { customer: true },
});
for (const order of orders) {
  console.log(order.customer.name); // no additional query
}

In Drizzle, this is even more explicit because you write joins yourself — but the same mistake is possible if you loop and query inside a map.
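For completeness, the single-query Drizzle version of the corrected example looks like this (table names follow the schema imports from earlier):

// One query with a join — no per-order lookups
const pendingOrders = await db
  .select({
    orderId: orders.id,
    customerName: users.name,
  })
  .from(orders)
  .innerJoin(users, eq(orders.customerId, users.id))
  .where(eq(orders.status, 'PENDING'));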

4. $queryRaw returns unknown[], not your model type

EF Core’s FromSqlRaw<T> materializes results into your entity type. Prisma’s $queryRaw gives you back untyped rows; the generic <User[]> annotation is a compile-time cast, not a runtime check.

// This compiles, but the cast is not validated at runtime
const users = await prisma.$queryRaw<User[]>`SELECT * FROM "User"`;
// users[0].email exists in TS types but could be undefined at runtime
// if your SQL doesn't select that column

// Safer: use Zod to validate the raw result
import { z } from 'zod';
const UserSchema = z.object({ id: z.string(), email: z.string() });
const raw = await prisma.$queryRaw`SELECT id, email FROM "User"`;
const users = z.array(UserSchema).parse(raw);

5. Prisma groupBy does not support computed fields in having directly

Complex HAVING clauses often require raw SQL in Prisma. EF Core’s LINQ-to-SQL translator handles most HAVING expressions naturally.
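When you hit that limit, the usual escape hatch is $queryRaw. A minimal sketch — the "Order" table and column names are assumptions, and the ::int cast keeps PostgreSQL’s COUNT from coming back as a BigInt:

// Customers with more than 10 orders in the last 30 days — HAVING via raw SQL
const heavyCustomers = await prisma.$queryRaw<
  { customerId: string; orderCount: number }[]
>`
  SELECT "customerId", COUNT(*)::int AS "orderCount"
  FROM "Order"
  WHERE "createdAt" >= NOW() - INTERVAL '30 days'
  GROUP BY "customerId"
  HAVING COUNT(*) > 10
`;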

6. EXPLAIN ANALYZE — use it

When a Prisma or Drizzle query is slow, check the query plan. Prisma logs queries in development with log: ['query']; Drizzle can be wrapped with a logger.

// Prisma — enable query logging
const prisma = new PrismaClient({
  log: ['query', 'warn', 'error'],
});

// Then inspect slow queries with EXPLAIN ANALYZE in psql:
// EXPLAIN ANALYZE SELECT * FROM "Order" WHERE "customerId" = '...' ORDER BY "createdAt" DESC;

The most common findings:

  • A missing index on a foreign key column (Prisma does not create these automatically).
  • A sequential scan on a large table caused by a filter on a non-indexed column.
  • An unexpected Nested Loop join from a sub-optimal Prisma include chain.
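The first finding usually has a one-line fix in the Prisma schema. A sketch, assuming an Order model with a customerId relation field:

model Order {
  id         String   @id @default(cuid())
  customerId String
  customer   Customer @relation(fields: [customerId], references: [id])
  createdAt  DateTime @default(now())

  @@index([customerId])            // covers the FK lookup behind include: { customer: true }
  @@index([customerId, createdAt]) // composite index for "latest orders per customer" sorts
}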

Hands-On Exercise

You have this EF Core query in a .NET API. Rewrite it twice: once using Prisma, once using Drizzle.

// Find all active customers who placed at least one order in the last 90 days,
// along with their most recent order and the count of total orders.
var results = await _db.Customers
    .Where(c => c.IsActive)
    .Where(c => c.Orders.Any(o => o.CreatedAt >= DateTime.UtcNow.AddDays(-90)))
    .Select(c => new {
        c.Id,
        c.Name,
        c.Email,
        TotalOrders = c.Orders.Count(),
        LatestOrder = c.Orders
            .OrderByDescending(o => o.CreatedAt)
            .Select(o => new { o.Id, o.Total, o.CreatedAt })
            .FirstOrDefault()
    })
    .OrderBy(c => c.Name)
    .ToListAsync();

Tasks:

  1. Write the Prisma version. Note that Any() has no direct equivalent — decide how you handle it.
  2. Write the Drizzle version using explicit joins and aggregations.
  3. Add EXPLAIN ANALYZE output for your Prisma version and identify which columns need indexes.
  4. Extend the Prisma version with cursor pagination rather than offset pagination.
  5. Add Zod validation for the raw SQL fallback if Prisma’s groupBy cannot express the query you need.

Quick Reference

| LINQ / EF Core | Prisma | Drizzle |
|---|---|---|
| .Where(u => u.IsActive) | where: { isActive: true } | .where(eq(users.isActive, true)) |
| .Where(u => u.Age >= 18) | where: { age: { gte: 18 } } | .where(gte(users.age, 18)) |
| .Where(a && b) | where: { AND: [...] } or nested | .where(and(cond1, cond2)) |
| .Where(a \|\| b) | where: { OR: [...] } | .where(or(cond1, cond2)) |
| .Select(u => new {...}) | select: { field: true } | .select({ alias: table.col }) |
| .Include(o => o.Customer) | include: { customer: true } | .innerJoin(customers, eq(...)) |
| .ThenInclude(li => li.Product) | Nested include: { product: true } | Additional .join() |
| .OrderBy(u => u.Name) | orderBy: { name: 'asc' } | .orderBy(asc(users.name)) |
| .OrderByDescending(...) | orderBy: { name: 'desc' } | .orderBy(desc(users.name)) |
| .GroupBy(u => u.Status) | groupBy: ['status'] | .groupBy(users.status) |
| .Count() | _count: { id: true } | count(table.id) |
| .Sum(o => o.Total) | _sum: { total: true } | sum(orders.total) |
| .AnyAsync(pred) | findFirst + !== null | .limit(1) + !== undefined |
| .Skip(n).Take(m) | skip: n, take: m | .offset(n).limit(m) |
| Cursor pagination | cursor: { id }, skip: 1 | where(gt(table.id, cursor)) |
| FromSqlRaw<T>(sql) | prisma.$queryRaw<T>`sql` | db.execute(sql`...`) |
| Change tracking | None — explicit updates required | None — explicit updates required |
| AsNoTracking() | Default behavior | Default behavior |
| Lazy loading | None | None |

Common Prisma filter operators:

| Operator | Prisma | SQL equivalent |
|---|---|---|
| Equals | { field: value } | = value |
| Not equals | { field: { not: value } } | != value |
| Greater than | { field: { gt: value } } | > value |
| Greater or equal | { field: { gte: value } } | >= value |
| Less than | { field: { lt: value } } | < value |
| Less or equal | { field: { lte: value } } | <= value |
| In list | { field: { in: [...] } } | IN (...) |
| Not in list | { field: { notIn: [...] } } | NOT IN (...) |
| Contains | { field: { contains: 'x' } } | LIKE '%x%' |
| Starts with | { field: { startsWith: 'x' } } | LIKE 'x%' |
| Is null | { field: null } | IS NULL |

Further Reading

  • Prisma Client API reference — the canonical reference for all query options, filter operators, and aggregation APIs
  • Prisma — Relation queries — detailed coverage of include, nested select, and how Prisma issues queries for each
  • Drizzle ORM documentation — covers the full query builder API, joins, and raw SQL
  • Prisma — Pagination — offset vs. cursor pagination with benchmarks and guidance on when each applies
  • Use the Index, Luke — database indexing fundamentals that apply equally when diagnosing slow Prisma or Drizzle queries; the “Slow Indexes” chapter is particularly relevant

Connection Management and Pooling

For .NET engineers who know: ADO.NET connection pooling, SqlConnection, IDbContextFactory<T>, and connection string configuration.
You’ll learn: How Node.js connection pooling differs from ADO.NET’s automatic model, why serverless deployments break standard pooling, and when PgBouncer is not optional.
Time: 15-20 min


The .NET Way (What You Already Know)

ADO.NET connection pooling is automatic and transparent. You never think about it. You call new SqlConnection(connectionString), open it, use it, and dispose it. The pool manages everything underneath.

// You write this
await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();
var result = await connection.QueryAsync<User>("SELECT * FROM Users WHERE Id = @Id", new { Id = id });

// The pool handles:
// - Maintaining a reusable set of open TCP connections
// - Handing a connection to you from the pool (or creating a new one)
// - Returning the connection to the pool on Dispose()
// - Validating the connection is still alive
// - Closing idle connections after timeout

With EF Core, DbContext wraps this further:

// In Program.cs
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString, npgsql => {
        npgsql.CommandTimeout(30);
        // Pool size defaults: min 0, max 100 per process
    })
);

// In your service — the DI container manages lifetime (Scoped by default)
public class UserService
{
    private readonly AppDbContext _db;
    public UserService(AppDbContext db) { _db = db; }

    public async Task<User?> GetUser(int id) =>
        await _db.Users.FindAsync(id);
}

With Npgsql (the .NET PostgreSQL driver), the pool behavior is:

| Parameter | Default | What it means |
|---|---|---|
| Minimum Pool Size | 0 | Connections to keep alive when idle |
| Maximum Pool Size | 100 | Hard ceiling on concurrent connections |
| Connection Idle Lifetime | 300s | How long before an idle connection is closed |
| Connection Pruning Interval | 10s | How often idle connections are checked |

ASP.NET Core registers DbContext as Scoped — one instance per HTTP request, disposed at request end. The underlying connection returns to the pool on disposal. From a pool perspective, your application uses at most one connection per in-flight request, bounded by MaxPoolSize.


The Node.js Way

Why Explicit Configuration is Required

Node.js has no built-in connection pooling equivalent to ADO.NET. Each PostgreSQL library ships its own pool implementation, and you configure it explicitly. The good news: the defaults are reasonable for long-running servers. The bad news: serverless environments break the entire model.

Prisma Connection Pooling

Prisma manages its own connection pool inside its query engine — you never create a Pool object yourself. The pool is configured via query parameters on the DATABASE_URL connection string or via PrismaClient options.

// .env
DATABASE_URL="postgresql://user:password@host:5432/mydb?connection_limit=10&pool_timeout=20"

// The datasource block in schema.prisma reads the same URL (including its query parameters)
// datasource db {
//   provider = "postgresql"
//   url      = env("DATABASE_URL")
// }

// PrismaClient instantiation — usually one instance per process
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient({
  log: process.env.NODE_ENV === 'development' ? ['query', 'warn', 'error'] : ['error'],
  // datasources override (rarely needed — prefer DATABASE_URL params)
});

export default prisma;

Singleton pattern is mandatory. Unlike DbContext in .NET (scoped per request), PrismaClient should be a process-level singleton. Each instance opens its own pool. Creating one per request is a severe bug — you will exhaust your database’s connection limit almost immediately.

// WRONG — one PrismaClient per request
app.get('/users', async (req, res) => {
  const prisma = new PrismaClient(); // opens a new pool connection set
  const users = await prisma.user.findMany();
  await prisma.$disconnect(); // wastes time on every request
  res.json(users);
});

// CORRECT — module-level singleton
// lib/prisma.ts
import { PrismaClient } from '@prisma/client';

const globalForPrisma = globalThis as unknown as { prisma: PrismaClient };

export const prisma =
  globalForPrisma.prisma ??
  new PrismaClient({
    log: ['error'],
  });

// In development, Next.js hot reload creates new module instances.
// The globalThis trick preserves the singleton across hot reloads.
if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma;

Prisma connection pool parameters:

| Parameter | Connection string key | Default | Recommendation |
|---|---|---|---|
| Pool size | connection_limit | num_cpus * 2 + 1 | Match to DB max_connections minus headroom |
| Pool timeout | pool_timeout | 10s | Raise to 20-30s for slow queries |
| Connection timeout | connect_timeout | 5s | Fine for most setups |
| Socket timeout | socket_timeout | None | Set to 30s in production |
# Example for a Render PostgreSQL free tier (max 25 connections)
DATABASE_URL="postgresql://user:pass@host/db?connection_limit=5&pool_timeout=20"

Drizzle Connection Options

Drizzle is a query builder — it does not own the pool. You provide the pool yourself via node-postgres (pg) or postgres.js.

// Using node-postgres (pg)
import { Pool } from 'pg';
import { drizzle } from 'drizzle-orm/node-postgres';
import * as schema from './schema';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                // Maximum pool size
  idleTimeoutMillis: 30_000,
  connectionTimeoutMillis: 5_000,
  ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false,
});

export const db = drizzle(pool, { schema });

// Using postgres.js (better performance, native binary support)
import postgres from 'postgres';
import { drizzle } from 'drizzle-orm/postgres-js';

const sql = postgres(process.env.DATABASE_URL!, {
  max: 10,
  idle_timeout: 30,
  connect_timeout: 5,
});

export const db = drizzle(sql, { schema });

Pool size comparison:

// node-postgres (pg) Pool options
const pool = new Pool({
  max: 10,                    // Max connections (default: 10)
  min: 2,                     // Minimum idle connections (default: 0)
  idleTimeoutMillis: 30_000,  // Remove idle connections after 30s
  connectionTimeoutMillis: 2_000, // Throw if can't acquire in 2s
  allowExitOnIdle: true,      // Allow process to exit when pool is idle (useful for scripts)
});

// postgres.js options
const sql = postgres(url, {
  max: 10,           // Max connections
  idle_timeout: 30,  // Seconds before closing idle connections
  max_lifetime: 3600, // Max lifetime of a connection in seconds (avoids stale connections)
  connect_timeout: 5, // Seconds to wait for connection
  prepare: true,      // Use prepared statements (better performance)
});

PgBouncer — The External Pooler

PgBouncer sits between your application and PostgreSQL and multiplexes many short-lived application connections onto a smaller number of real server connections. For serverless and edge deployments, it is effectively required.

graph LR
    subgraph without["Without PgBouncer"]
        L1a["Lambda function #1"]
        L2a["Lambda function #2"]
        LNa["Lambda function #N"]
        PG1a["PostgreSQL\n(connection 1)"]
        PG2a["PostgreSQL\n(connection 2)"]
        PGNa["PostgreSQL\n(connection N)"]
        L1a --> PG1a
        L2a --> PG2a
        LNa --> PGNa
    end

    subgraph with["With PgBouncer"]
        L1b["Lambda function #1"]
        L2b["Lambda function #2"]
        L3b["Lambda function #3"]
        LNb["Lambda function #N"]
        PGB["PgBouncer"]
        PG1b["PostgreSQL\n(connection 1)"]
        PG2b["PostgreSQL\n(connection 2, reused)"]
        L1b --> PGB
        L2b --> PGB
        L3b --> PGB
        LNb --> PGB
        PGB --> PG1b
        PGB --> PG2b
    end

PgBouncer modes:

| Mode | How it works | Use case |
|---|---|---|
| session | One server connection per client session | Long-lived connections, full PostgreSQL feature support |
| transaction | Server connection released after each transaction | Serverless, high concurrency — most common choice |
| statement | Released after each statement | Rarely used; breaks multi-statement transactions |

transaction mode is what you want for serverless. One important constraint: named prepared statements and SET session variables do not survive across transactions in transaction mode. Prisma’s Accelerate and Supabase’s Supavisor handle this transparently.
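If you run Prisma through PgBouncer in transaction mode yourself (rather than through Accelerate or Supavisor), the connection string needs the pgbouncer=true flag so Prisma avoids features that do not survive transaction pooling. Prisma also supports a separate direct connection (the datasource directUrl field) so migrations bypass the pooler. Hosts, ports, and the DIRECT_DATABASE_URL name below are illustrative:

# Application traffic goes through PgBouncer
DATABASE_URL="postgresql://user:pass@pgbouncer-host:6432/mydb?pgbouncer=true&connection_limit=10"

# Migrations connect directly to PostgreSQL
DIRECT_DATABASE_URL="postgresql://user:pass@db-host:5432/mydb"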

Render PostgreSQL free tier limits:

Render’s free tier allows 25 connections maximum. This is a hard limit at the PostgreSQL server level — not a soft pool limit.

# Render free tier allocation strategy:
# - Reserve 5 connections for admin/monitoring
# - Give your Node.js app a pool of 10
# - Give your dev/migration tooling 5
# - Leave 5 in reserve

DATABASE_URL="postgresql://user:pass@host/db?connection_limit=10&pool_timeout=20"

If you deploy to a serverless platform (Vercel, AWS Lambda, Cloudflare Workers), add PgBouncer or use Prisma Accelerate:

// Prisma Accelerate (Prisma's managed connection pooler + query cache)
// schema.prisma: datasource db { url = env("PRISMA_ACCELERATE_URL") }
// Uses the @prisma/extension-accelerate package

import { PrismaClient } from '@prisma/client';
import { withAccelerate } from '@prisma/extension-accelerate';

const prisma = new PrismaClient().$extends(withAccelerate());

// With caching
const user = await prisma.user.findUnique({
  where: { id: userId },
  cacheStrategy: { ttl: 60, swr: 30 }, // cache for 60s, stale-while-revalidate 30s
});

Monitoring Connection Usage

// Monitor pool health with node-postgres
pool.on('connect', () => console.log('New DB connection created'));
pool.on('acquire', () => console.log('Connection acquired from pool'));
pool.on('remove', () => console.log('Connection removed from pool'));

// Log pool stats periodically
setInterval(() => {
  console.log({
    totalCount: pool.totalCount,
    idleCount: pool.idleCount,
    waitingCount: pool.waitingCount,
  });
}, 30_000);

// With Prisma — no direct pool introspection API
// Use pg_stat_activity in PostgreSQL instead:
// SELECT count(*), state, wait_event_type
// FROM pg_stat_activity
// WHERE datname = 'mydb'
// GROUP BY state, wait_event_type;

Key Differences

| Concern | ADO.NET / Npgsql / EF Core | Prisma | Drizzle + pg |
|---|---|---|---|
| Pool management | Automatic, transparent | Built-in, config via URL params | Manual — you create the Pool |
| DbContext lifetime | Scoped per request (DI) | Module-level singleton | Module-level singleton |
| Hot reload safe | Yes (DI manages it) | Requires globalThis trick | Same |
| Serverless | Works (connections held briefly) | Breaks without Accelerate/PgBouncer | Breaks without PgBouncer |
| Pool size default | 100 (Npgsql) | num_cpus * 2 + 1 | 10 (pg), 10 (postgres.js) |
| External pooler | PgBouncer optional | Accelerate or PgBouncer for serverless | PgBouncer for serverless |
| Pool monitoring | Counters on NpgsqlDataSource | pg_stat_activity only | pool.totalCount etc. |
| Connection validation | Automatic | Automatic (retry on failure) | Manual (pg validates on acquire) |

Gotchas for .NET Engineers

1. Creating PrismaClient per request is a pool-exhausting bug

The single most common mistake from engineers coming from EF Core’s scoped DbContext model. DbContext is cheap to construct — the underlying pool is separate and managed by Npgsql. PrismaClient creates its own pool on instantiation. One per request means hundreds of pools, each trying to open connections, immediately overwhelming your PostgreSQL’s max_connections.

The fix is the module-level singleton shown above. In NestJS, register PrismaClient as a provider with module scope, not request scope.

// NestJS — correct registration
@Module({
  providers: [
    {
      provide: PrismaClient,
      useFactory: () => {
        const prisma = new PrismaClient();
        return prisma;
      },
    },
  ],
  exports: [PrismaClient],
})
export class DatabaseModule {}

2. Serverless cold starts create new pool connections on every invocation

In .NET, a long-running process holds its pool for the lifetime of the app. In serverless (AWS Lambda, Vercel Edge Functions), each invocation may spin up a fresh process, establish new connections, and then the process is frozen or terminated. Even with a module-level singleton, a cold start means new TCP connections to PostgreSQL.

At low concurrency this is tolerable. At high concurrency, hundreds of simultaneous cold starts each open their own pool connections, spiking past the database’s max_connections limit instantly. PgBouncer or a managed pooler (Prisma Accelerate, Supabase Supavisor) is the correct mitigation.

// Vercel / Lambda — with Prisma Accelerate
// The connection pooling happens at Accelerate's edge layer,
// not at the Lambda level. Your function connects to Accelerate,
// not directly to PostgreSQL.
const prisma = new PrismaClient({
  datasources: {
    db: { url: process.env.PRISMA_ACCELERATE_URL },
  },
}).$extends(withAccelerate());

3. PgBouncer transaction mode breaks session-level features

If you rely on PostgreSQL advisory locks, SET LOCAL, temporary tables, or prepared statements across a session, PgBouncer in transaction mode will silently break them. Your application code assumes it has a stable server connection for the duration of a session; PgBouncer reassigns the server connection after each transaction.

-- This DOES NOT work with PgBouncer transaction mode:
-- The SET applies to the server connection, which is returned to the pool
-- after the transaction. The next statement may get a different server connection.
SET search_path TO tenant_schema;
SELECT * FROM users;  -- may run on a connection with default search_path

If you need session-level features, use session mode or avoid PgBouncer for those specific connection paths.
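If the setting only needs to hold for a single unit of work, one workable pattern is to keep it inside a single transaction, because PgBouncer in transaction mode pins a server connection for the duration of that transaction. A minimal sketch with Prisma's interactive transaction, reusing the tenant_schema example above:

// Note: Prisma behind PgBouncer also needs ?pgbouncer=true in the connection URL
await prisma.$transaction(async (tx) => {
  // SET LOCAL is scoped to this transaction, and the whole callback runs on one
  // server connection even in PgBouncer transaction mode
  await tx.$executeRaw`SET LOCAL search_path TO tenant_schema`;
  return tx.$queryRaw`SELECT * FROM users`;
});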

4. Node.js needs fewer connections than you expect — but serverless needs a pooler anyway

The single-threaded event loop in Node.js means your application can handle hundreds of concurrent requests on a handful of DB connections. Ten to twenty connections is often sufficient for a busy long-running Node.js server. This is very different from ASP.NET Core, where many requests execute queries in parallel, each holding its own connection, and a busy API might want 50-100 connections.

The paradox: Node.js needs fewer connections per server instance, but serverless means many server instances, each wanting their own pool. PgBouncer resolves this by acting as the single pooled gateway.
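In practice this often means capping each serverless instance at a single connection and pointing it at the pooler instead of at PostgreSQL directly. A sketch of that configuration (host, credentials, and port are placeholders):

import { PrismaClient } from '@prisma/client';

// connection_limit=1 caps this instance's pool; pgbouncer=true makes Prisma skip
// prepared statements, which PgBouncer's transaction mode cannot support
const prisma = new PrismaClient({
  datasources: {
    db: {
      url: 'postgresql://myapp:secret@pgbouncer.internal:6432/mydb?connection_limit=1&pool_timeout=20&pgbouncer=true',
    },
  },
});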

5. Pool exhaustion silently degrades to queuing, then errors

When ADO.NET’s pool reaches Max Pool Size, further connection requests queue and eventually throw InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool.

In node-postgres, when max connections are in use, new requests queue. When connectionTimeoutMillis is exceeded, they reject with an error. Without explicit connectionTimeoutMillis, the wait is indefinite — your requests will hang forever rather than failing fast.

// Always set connectionTimeoutMillis
const pool = new Pool({
  max: 10,
  connectionTimeoutMillis: 5_000, // fail after 5s rather than hang forever
});

Hands-On Exercise

You are moving a .NET API to a Node.js/Prisma stack and deploying to Vercel (serverless). The PostgreSQL database is on Render’s free tier (25 connections max). There is also a CI pipeline that runs migrations and seeds data.

Tasks:

  1. Calculate the connection budget: Render allows 25 connections. How would you allocate them between the Vercel production deployment, staging, CI, and a DBA’s direct access? Write out the allocation.

  2. Configure DATABASE_URL with appropriate connection_limit and pool_timeout for each environment (production, staging, CI).

  3. Implement the PrismaClient singleton pattern that is safe for Next.js hot reload in development.

  4. The app is hitting max_connections in production during traffic spikes. Outline the steps to add PgBouncer on Render. What mode would you choose, and what feature does it break that requires a code change?

  5. Write a health check endpoint that queries pg_stat_activity to report current connection usage, and returns a 503 if usage exceeds 80% of your budget.


Quick Reference

| Concern | .NET (Npgsql/EF Core) | Prisma | Drizzle (pg) |
|---|---|---|---|
| Pool size config | Max Pool Size=100 in conn string | ?connection_limit=10 in URL | new Pool({ max: 10 }) |
| Pool timeout | Timeout=30 | ?pool_timeout=20 | connectionTimeoutMillis: 5000 |
| Singleton pattern | DI Scoped DbContext | globalThis singleton | Module-level Pool |
| Serverless solution | Works (long-lived process) | Prisma Accelerate or PgBouncer | PgBouncer |
| External pooler modes | PgBouncer session/transaction | PgBouncer or Accelerate | PgBouncer session/transaction |
| Monitor connections | NpgsqlDataSource metrics | pg_stat_activity SQL | pool.totalCount / pg_stat_activity |
| Disconnect cleanly | Dispose() | prisma.$disconnect() | pool.end() |

Useful PostgreSQL queries for connection monitoring:

-- Current connections by state and application
SELECT
  application_name,
  state,
  count(*)
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY application_name, state
ORDER BY count DESC;

-- Connections waiting (potential pool exhaustion signal)
SELECT count(*) FROM pg_stat_activity
WHERE datname = current_database()
  AND wait_event_type = 'Client'
  AND state = 'idle in transaction';

-- Kill idle connections older than 10 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database()
  AND state = 'idle'
  AND query_start < NOW() - INTERVAL '10 minutes';

PgBouncer pgbouncer.ini minimal config for transaction mode:

[databases]
mydb = host=db.render.com port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 15
min_pool_size = 2
reserve_pool_size = 3
server_idle_timeout = 600
log_connections = 1
log_disconnections = 1

Further Reading

  • Prisma — Connection management — the official guide covering singleton patterns, connection limits, and serverless considerations
  • Prisma Accelerate — Prisma’s managed connection pooler with edge caching; the zero-config solution for serverless Prisma deployments
  • node-postgres Pool documentation — pool configuration options, event handlers, and lifecycle management
  • PgBouncer documentation — full configuration reference; pay particular attention to pool modes and their trade-offs
  • Supabase Supavisor — Supabase’s modern Elixir-based pooler that handles both session and transaction modes correctly across serverless workloads

Database Testing: LocalDB vs. Docker and Testcontainers

For .NET engineers who know: SQL Server LocalDB, EF Core’s in-memory provider, xUnit, and integration testing with WebApplicationFactory
You’ll learn: How the TypeScript ecosystem replaces LocalDB with real PostgreSQL in Docker, runs integration tests against isolated containers, and seeds test data with factory libraries
Time: 15-20 min


The .NET Way (What You Already Know)

.NET gives you two main paths for database testing without touching production:

SQL Server LocalDB — a minimal SQL Server instance that runs as a user process. Integration tests connect to it with a LocalDB-specific connection string, run migrations, and test against real SQL Server behavior.

// appsettings.Testing.json
{
  "ConnectionStrings": {
    "Default": "Server=(localdb)\\mssqllocaldb;Database=MyApp_Test;Trusted_Connection=True;"
  }
}

// Integration test with WebApplicationFactory
public class UserApiTests : IAsyncLifetime
{
    private readonly WebApplicationFactory<Program> _factory;
    private AppDbContext _db = null!;

    public UserApiTests()
    {
        _factory = new WebApplicationFactory<Program>()
            .WithWebHostBuilder(builder => {
                builder.UseEnvironment("Testing");
            });
    }

    public async Task InitializeAsync()
    {
        _db = _factory.Services.GetRequiredService<AppDbContext>();
        await _db.Database.MigrateAsync(); // apply all migrations
    }

    public async Task DisposeAsync()
    {
        await _db.Database.EnsureDeletedAsync(); // clean up after suite
    }

    [Fact]
    public async Task CreateUser_Returns201_WithValidData()
    {
        var client = _factory.CreateClient();
        var response = await client.PostAsJsonAsync("/users", new { Name = "Alice", Email = "alice@example.com" });
        response.EnsureSuccessStatusCode();
        Assert.Equal(HttpStatusCode.Created, response.StatusCode);
    }
}

EF Core In-Memory Provider — a fake database that lives entirely in process memory. No SQL is generated, no PostgreSQL-specific behavior is exercised.

// Fast unit tests, but dangerous — doesn't test real SQL
services.AddDbContext<AppDbContext>(options =>
    options.UseInMemoryDatabase("TestDb"));

The in-memory provider is a trap. It accepts queries EF Core would reject against a real database. It does not enforce constraints, foreign keys, or uniqueness. It does not test your migrations. For anything beyond the simplest unit tests, LocalDB or a real database is the right choice.


The TypeScript Way

The Problem with In-Memory Fakes

TypeScript has no built-in equivalent to EF Core’s in-memory provider. Mock libraries like jest-mock-extended can mock the Prisma Client, but they have the same fundamental problem as the in-memory provider: they do not test your actual queries, your schema, or your database-specific behavior. Use them for pure unit tests of business logic, not for tests that touch data access.

For integration tests, the TypeScript ecosystem uses real PostgreSQL running in Docker.

Docker Compose for Local Development

The standard setup: a docker-compose.yml in your project root that defines a PostgreSQL container for local development and testing.

# docker-compose.yml
version: '3.9'

services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp_dev
    ports:
      - '5432:5432'
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql  # optional seed
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U myapp']
      interval: 5s
      timeout: 5s
      retries: 5

  postgres_test:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp_test
    ports:
      - '5433:5432'  # different host port so both can run simultaneously
    tmpfs:
      - /var/lib/postgresql/data  # in-memory storage — fast, no persistence

volumes:
  postgres_data:

The test database uses tmpfs — storage backed by RAM rather than disk. Writes do not survive container restart, which is fine for tests. It is significantly faster than disk-backed storage for write-heavy test workloads.

Start everything:

# Start both services
docker compose up -d

# Wait for postgres to be ready (if your test runner doesn't handle this)
docker compose exec postgres pg_isready -U myapp

Prisma Testing Patterns

Environment configuration for tests:

# .env.test
DATABASE_URL="postgresql://myapp:myapp@localhost:5433/myapp_test"
// package.json — pass test env when running tests
{
  "scripts": {
    "test": "dotenv -e .env.test -- vitest run",
    "test:watch": "dotenv -e .env.test -- vitest",
    "test:integration": "dotenv -e .env.test -- vitest run --reporter=verbose",
    "db:test:migrate": "dotenv -e .env.test -- prisma migrate deploy",
    "db:test:reset": "dotenv -e .env.test -- prisma migrate reset --force"
  }
}

Global test setup — apply migrations once, reset data between tests:

// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globalSetup: './tests/setup/global-setup.ts',
    setupFiles: ['./tests/setup/test-setup.ts'],
    environment: 'node',
    testTimeout: 30_000, // integration tests can be slow
  },
});
// tests/setup/global-setup.ts
// Runs once before all test suites
import { execSync } from 'child_process';

export async function setup() {
  // Apply all pending migrations to the test database
  execSync('prisma migrate deploy', {
    env: { ...process.env, DATABASE_URL: process.env.DATABASE_URL },
    stdio: 'inherit',
  });
}

export async function teardown() {
  // Optional: drop test database after all suites complete
}
// tests/setup/test-setup.ts
// Runs before each test file
import { prisma } from '../lib/prisma';

beforeEach(async () => {
  // Reset all data between tests — order matters for foreign keys
  await prisma.$transaction([
    prisma.orderItem.deleteMany(),
    prisma.order.deleteMany(),
    prisma.user.deleteMany(),
  ]);
});

afterAll(async () => {
  await prisma.$disconnect();
});

Truncation vs. deletion:

For large test databases, TRUNCATE with CASCADE and RESTART IDENTITY is faster than deleteMany():

// Faster reset for large datasets
async function resetDatabase(prisma: PrismaClient) {
  const tableNames = await prisma.$queryRaw<{ tablename: string }[]>`
    SELECT tablename FROM pg_tables
    WHERE schemaname = 'public'
    AND tablename <> '_prisma_migrations' -- exclude Prisma's migrations table (underscore is a LIKE wildcard, so match exactly)
  `;

  const tables = tableNames.map(({ tablename }) => `"${tablename}"`).join(', ');

  await prisma.$executeRawUnsafe(
    `TRUNCATE TABLE ${tables} RESTART IDENTITY CASCADE;`
  );
}

Testcontainers for Isolated Integration Tests

Testcontainers starts a real Docker container programmatically inside your test suite. Each test suite gets its own isolated PostgreSQL instance — no shared state, no port conflicts, no manual Docker management required.

npm install --save-dev @testcontainers/postgresql
// tests/integration/user.test.ts
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { PrismaClient } from '@prisma/client';
import { execSync } from 'child_process';

let container: StartedPostgreSqlContainer;
let prisma: PrismaClient;

beforeAll(async () => {
  // Start a real PostgreSQL container — takes 5-15 seconds on first run
  container = await new PostgreSqlContainer('postgres:16-alpine')
    .withDatabase('testdb')
    .withUsername('test')
    .withPassword('test')
    .start();

  const connectionUrl = container.getConnectionUri();

  // Apply migrations to the fresh database
  execSync('prisma migrate deploy', {
    env: { ...process.env, DATABASE_URL: connectionUrl },
    stdio: 'pipe',
  });

  prisma = new PrismaClient({
    datasources: { db: { url: connectionUrl } },
  });
}, 60_000); // generous timeout for container startup

afterAll(async () => {
  await prisma.$disconnect();
  await container.stop();
});

beforeEach(async () => {
  // Reset between tests within this suite
  await prisma.$transaction([
    prisma.order.deleteMany(),
    prisma.user.deleteMany(),
  ]);
});

describe('User repository', () => {
  test('creates a user and retrieves it by email', async () => {
    await prisma.user.create({
      data: { name: 'Alice', email: 'alice@example.com' },
    });

    const found = await prisma.user.findUnique({
      where: { email: 'alice@example.com' },
    });

    expect(found).not.toBeNull();
    expect(found!.name).toBe('Alice');
  });

  test('enforces unique email constraint', async () => {
    await prisma.user.create({ data: { name: 'Alice', email: 'dupe@example.com' } });

    await expect(
      prisma.user.create({ data: { name: 'Bob', email: 'dupe@example.com' } })
    ).rejects.toThrow(); // Prisma throws PrismaClientKnownRequestError P2002
  });
});

Testcontainers pulls the image from Docker Hub on first use and caches it. Subsequent runs use the cached image and start in 2-5 seconds.

Seeding Test Data with Factories

fishery is the TS equivalent of factory_bot (Ruby) or AutoFixture (C#). It generates objects that match your types, with sensible defaults you can override per test.

npm install --save-dev fishery @faker-js/faker
// tests/factories/user.factory.ts
import { Factory } from 'fishery';
import { faker } from '@faker-js/faker';
import { Prisma } from '@prisma/client';

// Factory for the Prisma create input shape
export const userFactory = Factory.define<Prisma.UserCreateInput>(() => ({
  name: faker.person.fullName(),
  email: faker.internet.email(),
  isActive: true,
  createdAt: faker.date.recent(),
}));

// tests/factories/order.factory.ts
import { Factory } from 'fishery';
import { faker } from '@faker-js/faker';
import { Prisma } from '@prisma/client';
import { userFactory } from './user.factory';

export const orderFactory = Factory.define<Prisma.OrderCreateInput>(({ associations }) => ({
  status: 'PENDING',
  total: parseFloat(faker.commerce.price({ min: 10, max: 500 })),
  customer: associations.customer ?? {
    create: userFactory.build(),
  },
  lineItems: {
    create: [
      {
        productName: faker.commerce.productName(),
        quantity: faker.number.int({ min: 1, max: 5 }),
        unitPrice: parseFloat(faker.commerce.price()),
      },
    ],
  },
}));
// Using factories in tests
test('returns only pending orders for a customer', async () => {
  const customer = await prisma.user.create({
    data: userFactory.build(),
  });

  // Create 3 orders with different statuses.
  // createMany does not accept nested relation writes, so create them individually.
  await Promise.all(
    [
      orderFactory.build({ status: 'PENDING', customer: { connect: { id: customer.id } } }),
      orderFactory.build({ status: 'COMPLETED', customer: { connect: { id: customer.id } } }),
      orderFactory.build({ status: 'PENDING', customer: { connect: { id: customer.id } } }),
    ].map((data) => prisma.order.create({ data }))
  );

  const pendingOrders = await prisma.order.findMany({
    where: { customerId: customer.id, status: 'PENDING' },
  });

  expect(pendingOrders).toHaveLength(2);
});

Building without persisting — useful for unit tests:

// Build the object without hitting the database
const userData = userFactory.build({ name: 'Specific Name' });
// userData is a plain object matching Prisma.UserCreateInput
// Use it to test pure functions that operate on user-shaped data

Mocking the DB Layer for Unit Tests

For pure unit tests of service logic, mock the Prisma Client rather than running a real database. jest-mock-extended (or vitest-mock-extended) generates type-safe mocks for any TypeScript type.

npm install --save-dev vitest-mock-extended
// tests/unit/user.service.test.ts
import { beforeEach, describe, expect, it } from 'vitest';
import { mockDeep, mockReset } from 'vitest-mock-extended';
import { PrismaClient } from '@prisma/client';
import { UserService } from '../../src/user.service';

const prismaMock = mockDeep<PrismaClient>();

beforeEach(() => {
  mockReset(prismaMock);
});

describe('UserService', () => {
  it('returns null when user does not exist', async () => {
    prismaMock.user.findUnique.mockResolvedValue(null);

    const service = new UserService(prismaMock);
    const result = await service.findById('nonexistent-id');

    expect(result).toBeNull();
    expect(prismaMock.user.findUnique).toHaveBeenCalledWith({
      where: { id: 'nonexistent-id' },
    });
  });

  it('throws when creating a user with duplicate email', async () => {
    prismaMock.user.create.mockRejectedValue(
      new Error('Unique constraint failed on the fields: (`email`)')
    );

    const service = new UserService(prismaMock);
    await expect(service.createUser({ name: 'Alice', email: 'exists@example.com' }))
      .rejects.toThrow();
  });
});

The mock is fully typed — prismaMock.user.findUnique has the same signature as the real Prisma method. TypeScript catches calls with wrong arguments.


Key Differences

| Concern | .NET (xUnit + LocalDB/EF) | TypeScript (Vitest + Docker/Testcontainers) |
|---|---|---|
| Real database engine | SQL Server LocalDB | PostgreSQL in Docker |
| In-memory option | EF Core InMemoryProvider | Mock PrismaClient (jest-mock-extended) |
| Migration application | Database.MigrateAsync() | prisma migrate deploy via execSync |
| Test isolation | EnsureDeletedAsync() per suite | deleteMany() or TRUNCATE ... CASCADE per test |
| Test container per test | Respawn in EF Core | @testcontainers/postgresql |
| Object factories | AutoFixture, Bogus | fishery + @faker-js/faker |
| DI override | WebApplicationFactory.WithWebHostBuilder | Direct service instantiation or inversify |
| Test runner | xUnit, NUnit, MSTest | Vitest (preferred), Jest |

Gotchas for .NET Engineers

1. Prisma does not apply migrations automatically in tests

EF Core’s Database.MigrateAsync() is one method call that applies all pending migrations. Prisma has no equivalent inside the PrismaClient API. You must call prisma migrate deploy via a shell command before tests run. Forgetting this means your tests run against a stale or empty schema and produce confusing errors.

// global-setup.ts — do not skip this
import { execSync } from 'child_process';

export async function setup() {
  execSync('prisma migrate deploy', {
    env: { ...process.env, DATABASE_URL: process.env.DATABASE_URL! },
    stdio: 'inherit',
  });
  console.log('Migrations applied to test database');
}

If you use Testcontainers, you must run migrations against the container’s connection URL, not the one in your environment:

execSync('prisma migrate deploy', {
  env: { ...process.env, DATABASE_URL: container.getConnectionUri() },
  stdio: 'pipe',
});

2. deleteMany ordering matters — foreign key violations will stop your reset

EF Core’s EnsureDeletedAsync() drops the entire database, bypassing all constraint checks. Prisma’s deleteMany() deletes rows from a live table. If you delete a parent table before its child tables, PostgreSQL will throw a foreign key violation.

// WRONG — deletes users before orders that reference them
await prisma.$transaction([
  prisma.user.deleteMany(),   // FK violation: orders still reference users
  prisma.order.deleteMany(),
]);

// CORRECT — delete children first, then parents
await prisma.$transaction([
  prisma.orderItem.deleteMany(),
  prisma.order.deleteMany(),
  prisma.user.deleteMany(),
]);

// Or use raw TRUNCATE with CASCADE (simpler for complex schemas)
await prisma.$executeRaw`TRUNCATE TABLE "OrderItem", "Order", "User" CASCADE`;

3. Testcontainer startup time requires a long timeout

xUnit imposes no default timeout on fixture or collection setup, so a 5-10 second container startup is unremarkable. Vitest’s beforeAll is async but governed by the hook timeout, which is short by default. Container startup, especially the first image pull, easily exceeds it.

Always set an explicit timeout on the beforeAll that starts your container:

beforeAll(async () => {
  container = await new PostgreSqlContainer().start();
  // ... migrate, connect ...
}, 60_000); // 60 seconds is safe; first pull takes longer

Also set testTimeout in vitest.config.ts to a value that accommodates your test durations.

4. The EF Core in-memory provider has no PostgreSQL equivalent — do not reach for a mock when you should use Docker

A common mistake is to use mockDeep<PrismaClient>() for every test, avoiding Docker entirely. This mirrors the mistake of using EF Core’s InMemoryProvider for integration tests. Mock-based tests do not verify that your Prisma queries produce correct SQL. They do not catch N+1 bugs, missing indexes, constraint violations, or schema drift. Use mocks only for unit tests of business logic that does not touch the database.

A reasonable split (a config sketch follows the list):

  • Unit tests (fast, no Docker): mock PrismaClient, test service and domain logic in isolation.
  • Integration tests (slower, require Docker): use @testcontainers/postgresql or the shared Docker Compose test database, test real queries end to end.
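One way to express the split, assuming unit tests live under tests/unit and integration tests under tests/integration (the directory layout and the TEST_SUITE variable are assumptions; set them however your npm scripts prefer):

// vitest.config.ts sketch: only the integration paths get the Docker-backed setup
import { defineConfig } from 'vitest/config';

const integration = process.env.TEST_SUITE === 'integration'; // set by your npm script

export default defineConfig({
  test: {
    include: integration
      ? ['tests/integration/**/*.test.ts']
      : ['tests/unit/**/*.test.ts'],
    globalSetup: integration ? './tests/setup/global-setup.ts' : undefined,
    testTimeout: integration ? 30_000 : 5_000,
  },
});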

5. faker data can violate unique constraints unless you scope it

faker.internet.email() generates realistic-looking emails, but it draws from a finite pool of names and domains, so values repeat. In a test suite with many test cases, you will eventually collide with a duplicate.

// Fragile — same email could appear twice across test runs
const email = faker.internet.email();

// Robust — prefix with a unique identifier
const email = `test-${crypto.randomUUID()}@example.com`;

// Or use fishery's sequence counter
export const userFactory = Factory.define<Prisma.UserCreateInput>(({ sequence }) => ({
  email: `user-${sequence}@example.com`, // guaranteed unique within this factory
  name: faker.person.fullName(),
}));

Hands-On Exercise

Set up a complete integration testing environment for a simple blog API with Post and Comment entities.

Schema to implement:

model User {
  id       String    @id @default(cuid())
  email    String    @unique
  name     String
  posts    Post[]
}

model Post {
  id        String    @id @default(cuid())
  title     String
  content   String
  published Boolean   @default(false)
  authorId  String
  author    User      @relation(fields: [authorId], references: [id])
  comments  Comment[]
  createdAt DateTime  @default(now())
}

model Comment {
  id        String   @id @default(cuid())
  content   String
  postId    String
  post      Post     @relation(fields: [postId], references: [id])
  createdAt DateTime @default(now())
}

Tasks:

  1. Write a docker-compose.yml with a postgres_test service using tmpfs storage on port 5433.
  2. Write vitest.config.ts with globalSetup that applies migrations before all tests.
  3. Write beforeEach cleanup that resets data in the correct order.
  4. Write factories for User, Post, and Comment using fishery and faker.
  5. Write an integration test that verifies: fetching a published post includes its comments and author, and fetching an unpublished post returns null for a non-author.
  6. Write a unit test using mockDeep<PrismaClient>() that tests a PostService.publish() method throws when the post does not exist.

Quick Reference

| Task | Command / Code |
|---|---|
| Start test DB | docker compose up -d postgres_test |
| Apply migrations to test DB | dotenv -e .env.test -- prisma migrate deploy |
| Reset test DB | dotenv -e .env.test -- prisma migrate reset --force |
| Generate Prisma client | prisma generate |
| Run integration tests | dotenv -e .env.test -- vitest run |
| Pull Testcontainers image | Automatic on first run |

Testcontainers minimal setup:

import { PostgreSqlContainer } from '@testcontainers/postgresql';

const container = await new PostgreSqlContainer('postgres:16-alpine')
  .withDatabase('testdb')
  .withUsername('test')
  .withPassword('test')
  .start();

const url = container.getConnectionUri();
// Use url for DATABASE_URL in prisma migrate deploy and PrismaClient
await container.stop(); // in afterAll

fishery factory pattern:

import { Factory } from 'fishery';
import { faker } from '@faker-js/faker';
import { Prisma } from '@prisma/client';

export const entityFactory = Factory.define<Prisma.EntityCreateInput>(({ sequence }) => ({
  field: faker.word.noun(),
  uniqueField: `value-${sequence}`,
}));

// Build without saving
const obj = entityFactory.build({ field: 'override' });

// Create in database (requires an onCreate hook on the factory that persists via Prisma)
const created = await entityFactory.create({ field: 'override' }, { transient: { prisma } });

Vitest config for integration tests:

// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globalSetup: './tests/setup/global-setup.ts',
    setupFiles: ['./tests/setup/test-setup.ts'],
    testTimeout: 30_000,
    hookTimeout: 60_000, // generous for container startup
    poolOptions: {
      threads: {
        singleThread: true, // avoid parallel DB access conflicts in integration tests
      },
    },
  },
});

Further Reading

Data Access Patterns: Repository Pattern and Unit of Work

For .NET engineers who know: The Repository pattern, Unit of Work, EF Core’s DbContext as a built-in UoW, and why SaveChanges() exists
You’ll learn: When wrapping Prisma in a Repository adds value, when it is unnecessary overhead, and how transactions replace the Unit of Work in TypeScript
Time: 10-15 min


The .NET Way (What You Already Know)

In .NET, the Repository pattern and Unit of Work (UoW) are staples of the enterprise architecture playbook. EF Core’s DbContext is itself an implementation of the Unit of Work pattern, and the DbSet<T> properties on a DbContext act as repositories. SaveChanges() commits everything the UoW tracked.

// The "classic" pattern: wrap DbContext in explicit repository abstractions
public interface IUserRepository
{
    Task<User?> FindByIdAsync(Guid id);
    Task<IReadOnlyList<User>> FindActiveAsync();
    void Add(User user);
    void Remove(User user);
}

public interface IUnitOfWork
{
    IUserRepository Users { get; }
    IOrderRepository Orders { get; }
    Task<int> SaveChangesAsync(CancellationToken ct = default);
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly AppDbContext _db;

    public EfUnitOfWork(AppDbContext db) { _db = db; }

    public IUserRepository Users => new EfUserRepository(_db);
    public IOrderRepository Orders => new EfOrderRepository(_db);

    public Task<int> SaveChangesAsync(CancellationToken ct = default) =>
        _db.SaveChangesAsync(ct);
}

// A service that uses it
public class OrderService
{
    private readonly IUnitOfWork _uow;

    public OrderService(IUnitOfWork uow) { _uow = uow; }

    public async Task PlaceOrderAsync(Guid userId, List<OrderItemDto> items)
    {
        var user = await _uow.Users.FindByIdAsync(userId)
            ?? throw new NotFoundException($"User {userId} not found");

        var order = Order.Create(user, items);
        _uow.Orders.Add(order);

        await _uow.SaveChangesAsync(); // single transaction: user lookup + order insert
    }
}

Why did the pattern become so prevalent? Several reasons:

  1. Testability — IUserRepository is mockable; DbContext directly is not.
  2. Decoupling — services depend on interfaces, not on EF Core. You could (theoretically) swap ORMs.
  3. Explicit transactions — the UoW groups multiple operations into one atomic commit.
  4. SaveChanges() semantics — EF Core’s change tracking requires SaveChanges() to flush. The UoW pattern gives that a named home.

All of these motivations are real. But some of them evaporate in the Prisma/NestJS world.


The TypeScript Way

Why the UoW Pattern Mostly Disappears

The Unit of Work exists in EF Core because EF Core tracks changes. You load an entity, mutate it, and SaveChanges() detects and flushes the diff. Without change tracking, there is no “unit” to save. Every Prisma operation is immediately explicit — there is nothing to accumulate and flush.

This eliminates the primary mechanical reason for the UoW abstraction. What remains is transaction management, and Prisma handles that directly.

Prisma $transaction — The Replacement for SaveChanges + UoW

// Sequential transaction — operations run one after another, sharing a connection
const [user, order] = await prisma.$transaction([
  prisma.user.findUniqueOrThrow({ where: { id: userId } }),
  prisma.order.create({
    data: {
      customerId: userId,
      status: 'PENDING',
      total: items.reduce((sum, item) => sum + item.price * item.quantity, 0),
      lineItems: { create: items },
    },
  }),
]);
// EF Core equivalent — implicit transaction via SaveChanges
var user = await _db.Users.FindAsync(userId) ?? throw new NotFoundException();
var order = new Order { CustomerId = userId, Status = OrderStatus.Pending };
order.AddItems(items);
_db.Orders.Add(order);
await _db.SaveChangesAsync(); // atomic: both the order + line items committed together

The sequential transaction API works for simple cases. For complex multi-step logic, use the interactive transaction:

// Interactive transaction — explicit control, single shared connection
await prisma.$transaction(async (tx) => {
  // tx is a Prisma client scoped to this transaction
  const user = await tx.user.findUniqueOrThrow({ where: { id: userId } });

  if (!user.isActive) {
    throw new Error('Cannot place order for inactive user');
  }

  const inventory = await tx.product.findMany({
    where: { id: { in: items.map((i) => i.productId) } },
    select: { id: true, stock: true, price: true },
  });

  for (const item of items) {
    const product = inventory.find((p) => p.id === item.productId);
    if (!product || product.stock < item.quantity) {
      throw new Error(`Insufficient stock for product ${item.productId}`);
    }
  }

  // Decrement inventory sequentially: the transaction client runs on a single
  // connection, so avoid Promise.all inside an interactive transaction
  for (const item of items) {
    await tx.product.update({
      where: { id: item.productId },
      data: { stock: { decrement: item.quantity } },
    });
  }

  // Create order
  const order = await tx.order.create({
    data: {
      customerId: userId,
      status: 'PENDING',
      total: items.reduce((sum, item) => {
        const product = inventory.find((p) => p.id === item.productId)!;
        return sum + product.price * item.quantity;
      }, 0),
      lineItems: { create: items },
    },
  });

  return order;
});

If any statement inside the callback throws, Prisma rolls back the entire transaction. This replaces both SaveChanges() and the UoW boundary.
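From the caller's side it looks like this: the thrown error propagates out of $transaction after the rollback (orderData is a placeholder):

try {
  await prisma.$transaction(async (tx) => {
    await tx.order.create({ data: orderData });
    throw new Error('inventory check failed'); // aborts the transaction
  });
} catch (err) {
  // the order insert above was rolled back; nothing was committed
  console.error('order placement rolled back', err);
}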

Transaction options:

await prisma.$transaction(async (tx) => {
  // ...
}, {
  maxWait: 5000,   // ms to wait for a connection from the pool
  timeout: 10_000, // ms before the transaction is automatically rolled back
  isolationLevel: Prisma.TransactionIsolationLevel.Serializable,
});

The NestJS Service Pattern — Prisma IS Your Repository

In a standard NestJS application, services use Prisma directly. The service is a thin wrapper around Prisma queries. This is not a lack of architecture — it is a deliberate choice to avoid indirection that provides no value.

// users/user.service.ts
import { Injectable, NotFoundException } from '@nestjs/common';
import { PrismaService } from '../prisma/prisma.service';
import { CreateUserDto } from './dto/create-user.dto';
import { UpdateUserDto } from './dto/update-user.dto';

@Injectable()
export class UserService {
  constructor(private readonly prisma: PrismaService) {}

  async findById(id: string) {
    const user = await this.prisma.user.findUnique({ where: { id } });
    if (!user) throw new NotFoundException(`User ${id} not found`);
    return user;
  }

  async findAll(isActive?: boolean) {
    return this.prisma.user.findMany({
      where: isActive !== undefined ? { isActive } : undefined,
      orderBy: { createdAt: 'desc' },
    });
  }

  async create(dto: CreateUserDto) {
    return this.prisma.user.create({ data: dto });
  }

  async update(id: string, dto: UpdateUserDto) {
    return this.prisma.user.update({ where: { id }, data: dto });
  }

  async delete(id: string) {
    return this.prisma.user.delete({ where: { id } });
  }
}
// prisma/prisma.service.ts
import { Injectable, OnModuleInit } from '@nestjs/common';
import { PrismaClient } from '@prisma/client';

@Injectable()
export class PrismaService extends PrismaClient implements OnModuleInit {
  async onModuleInit() {
    await this.$connect();
  }
  // implement OnModuleDestroy with this.$disconnect() if you need an explicit disconnect on shutdown
}
// users/user.module.ts
import { Module } from '@nestjs/common';
import { UserService } from './user.service';
import { UserController } from './user.controller';
import { PrismaModule } from '../prisma/prisma.module';

@Module({
  imports: [PrismaModule],
  providers: [UserService],
  controllers: [UserController],
  exports: [UserService],
})
export class UserModule {}

Compare with the C# equivalent. The NestJS service plays the role of the repository. Prisma is the data layer. There is no intermediate interface.

// C# — what the NestJS pattern collapses into one class
public class UserService
{
    private readonly AppDbContext _db;  // Prisma fills this role
    public UserService(AppDbContext db) { _db = db; }

    public async Task<User?> FindById(Guid id) => await _db.Users.FindAsync(id);
    public async Task<List<User>> FindAll() => await _db.Users.ToListAsync();
    // ...
}

When a Repository Wrapper Does Add Value

The rule of thumb: add a Repository abstraction when you have a specific reason, not by default.

Reason 1: You need to swap data sources

If a feature might be backed by PostgreSQL today and an external API or Redis tomorrow, a repository interface isolates the swap.

// Useful abstraction — implementation can change without touching the service
interface FeatureFlagRepository {
  isEnabled(flag: string, userId: string): Promise<boolean>;
}

// One implementation backed by Prisma
class PrismaFeatureFlagRepository implements FeatureFlagRepository {
  constructor(private readonly prisma: PrismaService) {}

  async isEnabled(flag: string, userId: string): Promise<boolean> {
    const record = await this.prisma.featureFlag.findFirst({
      where: { name: flag, userId },
    });
    return record?.enabled ?? false;
  }
}

// Another backed by LaunchDarkly
class LaunchDarklyFeatureFlagRepository implements FeatureFlagRepository {
  async isEnabled(flag: string, userId: string): Promise<boolean> {
    return launchDarkly.variation(flag, { key: userId }, false);
  }
}

Reason 2: Complex query logic that benefits from a dedicated home

If you have ten-line Prisma queries that are reused across multiple services, extracting them into a named method on a Repository class keeps services readable.

// OrderRepository — justified by query complexity and reuse
@Injectable()
export class OrderRepository {
  constructor(private readonly prisma: PrismaService) {}

  async findOrdersWithRevenueSummary(customerId: string, dateRange: DateRange) {
    return this.prisma.order.groupBy({
      by: ['status'],
      _count: { id: true },
      _sum: { total: true },
      where: {
        customerId,
        createdAt: { gte: dateRange.from, lte: dateRange.to },
      },
    });
  }

  async findPaginatedWithDetails(
    filters: OrderFilters,
    cursor?: string,
    take = 20,
  ) {
    return this.prisma.order.findMany({
      where: {
        status: filters.status,
        customerId: filters.customerId,
        createdAt: filters.dateRange
          ? { gte: filters.dateRange.from, lte: filters.dateRange.to }
          : undefined,
      },
      include: { customer: true, lineItems: { include: { product: true } } },
      orderBy: { createdAt: 'desc' },
      take,
      cursor: cursor ? { id: cursor } : undefined,
      skip: cursor ? 1 : 0,
    });
  }
}

Reason 3: You need to test service logic in isolation without a database

If your service has non-trivial business logic and you want fast unit tests that do not spin up Docker, a Repository interface lets you mock data access.

// Interface defined for testability
interface UserRepository {
  findById(id: string): Promise<User | null>;
  save(user: User): Promise<User>;
}

// In the test — mock the interface, not PrismaClient
const mockUserRepo: UserRepository = {
  findById: vi.fn().mockResolvedValue(null),
  save: vi.fn(),
};

const service = new UserService(mockUserRepo);

When this reason applies, you are usually better served by vitest-mock-extended to mock PrismaService directly (see the previous article) — the interface is an extra layer unless you genuinely need it.
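For reference, a minimal sketch of that direct approach, reusing the vitest-mock-extended pattern from the previous article (import paths assume the module layout shown earlier in this chapter):

import { describe, expect, it } from 'vitest';
import { mockDeep } from 'vitest-mock-extended';
import { PrismaService } from '../prisma/prisma.service';
import { UserService } from './user.service';

describe('UserService (unit)', () => {
  it('throws when the user does not exist', async () => {
    const prismaMock = mockDeep<PrismaService>();
    prismaMock.user.findUnique.mockResolvedValue(null);

    const service = new UserService(prismaMock);
    await expect(service.findById('missing-id')).rejects.toThrow(); // NotFoundException from the service
  });
});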


Key Differences

| Concern | EF Core + UoW pattern | Prisma in NestJS |
|---|---|---|
| Change tracking | Yes — mutations auto-detected | No — every update is explicit |
| Unit of Work | DbContext.SaveChanges() | prisma.$transaction([...]) or callback |
| Repository abstraction | IRepository<T> over DbSet<T> | Usually none — PrismaService is injected directly |
| When to add Repository | Common (testability, decoupling) | Only when you have a specific reason |
| Transaction scope | Implicit UoW boundary | Explicit $transaction callback |
| Isolation levels | Database.BeginTransactionAsync(IsolationLevel.X) | prisma.$transaction(fn, { isolationLevel }) |
| Service layer role | Orchestrates repos, UoW | Owns queries directly via Prisma |
| DI lifetime | Scoped DbContext | Global PrismaService |

Gotchas for .NET Engineers

1. There is no SaveChanges — every mutation must be explicit

This is the most disorienting shift. In EF Core, you can load entities, call methods on them, and SaveChanges() sorts it out. There is no equivalent in Prisma. If you do not call prisma.user.update(...), nothing is persisted, no matter what you do to the object in memory.

// This persists nothing
const user = await prisma.user.findUnique({ where: { id } });
user!.name = 'New Name'; // mutates the JS object only
// No SaveChanges equivalent. The database is unchanged.

// This persists the change
await prisma.user.update({
  where: { id },
  data: { name: 'New Name' },
});

2. Interactive transactions have a default timeout of 5 seconds

EF Core transactions time out based on the CommandTimeout you configure, which is often 30 seconds or more. Prisma’s interactive transaction defaults to 5 seconds (timeout: 5000). If your transaction involves slow queries or multiple round trips, it will roll back before completing.

// Increase for slow or complex transactions
await prisma.$transaction(async (tx) => {
  // multi-step logic...
}, {
  timeout: 15_000,  // 15 seconds
  maxWait: 5_000,   // wait up to 5s for a connection
});

3. Wrapping PrismaService in a generic Repository<T> buys almost nothing

A common reflex from .NET is to create a Repository<T> base class that wraps CRUD operations:

// Looks familiar but adds no value in Prisma
class Repository<T> {
  constructor(private readonly model: any) {}
  findById(id: string) { return this.model.findUnique({ where: { id } }); }
  findAll() { return this.model.findMany(); }
  create(data: any) { return this.model.create({ data }); }
  update(id: string, data: any) { return this.model.update({ where: { id }, data }); }
  delete(id: string) { return this.model.delete({ where: { id } }); }
}

This destroys type safety — model: any is untyped, and Prisma’s per-model client is already a well-typed, model-specific API. You lose Prisma’s type inference and gain nothing. Prisma’s user, order, etc. model clients are already thin, typed repositories. Do not wrap them in another layer unless the abstraction carries specific value.
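The contrast is easy to see side by side (id is a placeholder; Repository is the wrapper defined above):

const id = 'some-user-id';

// Prisma's per-model client: the result is inferred as User | null, no wrapper needed
const user = await prisma.user.findUnique({ where: { id } });

// The generic wrapper: everything degrades to any, so typos and wrong shapes compile silently
const fromRepo = await new Repository(prisma.user).findById(id); // typed as any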

4. Passing tx (the transaction client) through the call stack is manual work

EF Core’s transaction is implicit once you call Database.BeginTransactionAsync(). Any DbContext operation within that scope is automatically in the transaction. Prisma’s interactive transaction gives you a scoped tx client that you must pass explicitly to every operation in the transaction.

If your transaction spans multiple service methods, each method needs to accept tx as an optional parameter:

async function processOrder(
  userId: string,
  items: OrderItem[],
  tx?: Prisma.TransactionClient
): Promise<Order> {
  const client = tx ?? prisma; // use tx if in a transaction, prisma otherwise
  return client.order.create({ data: { ... } });
}

// Calling with a transaction
await prisma.$transaction(async (tx) => {
  await checkInventory(items, tx);    // passes tx down
  await processOrder(userId, items, tx); // passes tx down
  await notifyWarehouse(items, tx);   // passes tx down
});

This is more verbose than EF Core’s ambient transaction model, but it is explicit and easy to trace.


Hands-On Exercise

You are reviewing a NestJS PR that adds a full Repository + Unit of Work abstraction over Prisma for a small CRUD application with five models: User, Post, Comment, Tag, and PostTag. The PR adds:

  • IRepository<T> base interface with findById, findAll, create, update, delete
  • Five repository classes implementing it
  • IUnitOfWork interface with five repository properties and a commit() method
  • PrismaUnitOfWork class implementing IUnitOfWork
  • All five services updated to use IUnitOfWork instead of PrismaService

Tasks:

  1. List three concrete concerns this abstraction solves for this application. If you cannot list three, note which arguments are weaker than they appear.

  2. Rewrite PostService.publishPost(postId, authorId) in two versions:

    • Version A: using IUnitOfWork as described in the PR
    • Version B: using PrismaService directly with $transaction

    Compare verbosity, type safety, and testability.
  3. The PR’s commit() method wraps prisma.$transaction([...]) with the operations accumulated since the last commit. What does this break that Prisma’s interactive transaction handles correctly?

  4. Write a unit test for PostService.publishPost in both versions. Which version is easier to test, and why?

  5. Give a verdict: accept, request changes, or reject. State the one scenario where the abstraction would become justified, and what change to the application would trigger that scenario.


Quick Reference

| EF Core | Prisma equivalent | Notes |
|---|---|---|
| _db.SaveChangesAsync() | prisma.$transaction([...]) | No implicit accumulation — all ops explicit |
| IUnitOfWork.commit() | prisma.$transaction(fn) | Callback form supports conditional logic |
| IRepository<T> | prisma.modelName | Already a typed, model-specific API |
| Database.BeginTransactionAsync() | prisma.$transaction(async tx => {...}) | tx must be passed explicitly through the call stack |
| IsolationLevel.Serializable | { isolationLevel: Prisma.TransactionIsolationLevel.Serializable } | Option on $transaction |
| Rollback on exception | Automatic inside $transaction callback | Same as EF Core — any throw rolls back |
| DbContext Scoped lifetime | PrismaService singleton | PrismaService is global; no per-request instance needed |
| IRepository<T> for mocking | vi.mock() or mockDeep<PrismaService>() | Mock PrismaService directly; no interface needed |

PrismaService for NestJS (canonical pattern):

// prisma/prisma.service.ts
import { Injectable, OnModuleInit } from '@nestjs/common';
import { PrismaClient } from '@prisma/client';

@Injectable()
export class PrismaService extends PrismaClient implements OnModuleInit {
  async onModuleInit() {
    await this.$connect();
  }
}

// prisma/prisma.module.ts
import { Global, Module } from '@nestjs/common';

@Global()  // makes PrismaService available everywhere without importing PrismaModule
@Module({
  providers: [PrismaService],
  exports: [PrismaService],
})
export class PrismaModule {}

Transaction pattern for cross-service operations:

// Pass Prisma.TransactionClient as optional param for composability
async function operationA(
  data: AData,
  tx?: Prisma.TransactionClient,
): Promise<A> {
  const client = tx ?? prisma;
  return client.a.create({ data });
}

async function operationB(
  data: BData,
  tx?: Prisma.TransactionClient,
): Promise<B> {
  const client = tx ?? prisma;
  return client.b.create({ data });
}

// Compose atomically
await prisma.$transaction(async (tx) => {
  const a = await operationA(aData, tx);
  const b = await operationB({ ...bData, aId: a.id }, tx);
  return { a, b };
});

Further Reading

Git & GitHub CLI: From TFS/Azure DevOps to GitHub

For .NET engineers who know: TFS, Azure DevOps Repos, Visual Studio Git integration, Azure DevOps pull requests
You’ll learn: How to operate Git and GitHub entirely from the terminal, including our team’s branch strategy, commit format, and code review workflow
Time: 15-20 minutes


The .NET Way (What You Already Know)

In Azure DevOps, your workflow probably looked like this:

  • You cloned a repo through Visual Studio or the Azure DevOps web UI
  • Branches were created through a GUI with a ticket number prefix
  • Pull Requests were reviewed in the Azure DevOps web portal
  • Pipelines ran automatically on PR or merge
  • Work Items were linked to commits and PRs through the UI

If you used TFS, changesets were the unit of work. Shelvesets held work-in-progress. Branching was expensive and discouraged. Merging was done through a GUI and felt dangerous.

Git fundamentally changed this — branching is free, history is local, and the terminal is the canonical interface. Most experienced JS engineers never open a GUI for Git.


The GitHub Way

Core Concepts Mapped

| Azure DevOps / TFS Concept | Git / GitHub Equivalent |
|---|---|
| Repository | Repository (same) |
| Changeset | Commit |
| Shelveset | git stash |
| Branch policy | Branch protection rules |
| Pull Request | Pull Request (same concept, different UI) |
| Work Item link | Issue reference in commit/PR body |
| Build pipeline | GitHub Actions workflow |
| Service connection | GitHub Secret |
| Variable group | Repository/Organization secrets |
| tf get | git pull --rebase |
| tf checkin | git commit + git push |
| Code review in ADO portal | gh pr review or GitHub web |

Our Git Configuration Baseline

Before anything else, configure Git properly:

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --global core.editor "code --wait"
git config --global pull.rebase true
git config --global init.defaultBranch main
git config --global rebase.autoStash true

The pull.rebase true setting means git pull automatically rebases instead of creating a merge commit. This keeps history linear — the same reason you’d use “Rebase and fast-forward” in Azure DevOps.


Essential Commands You Need Daily

Checking state:

git status                    # What's changed?
git diff                      # Unstaged changes
git diff --staged             # Staged changes (what will commit)
git log --oneline --graph     # Visual history
git log --oneline -10         # Last 10 commits

Branching:

git checkout -b feature/my-feature    # Create and switch
git switch -c feature/my-feature      # Modern syntax (same result)
git branch -a                         # All branches including remote
git branch -d feature/done            # Delete merged branch
git branch -D feature/abandoned       # Force delete

Staging and committing:

git add src/components/Button.tsx     # Stage specific file
git add -p                            # Interactive staging (hunk by hunk)
git commit -m "feat(auth): add JWT refresh logic"
git commit --amend                    # Fix last commit message or add files

Synchronizing:

git fetch origin                      # Download remote changes, don't apply
git pull                              # Fetch + rebase (with our config)
git push origin feature/my-feature    # Push branch
git push -u origin HEAD               # Push current branch, set upstream

Git Stash: The Shelveset Equivalent

Stash saves your work-in-progress without committing:

git stash                             # Stash all tracked changes
git stash push -m "WIP: auth refactor"  # Named stash
git stash list                        # See all stashes
git stash pop                         # Apply most recent stash, then delete it
git stash apply stash@{2}             # Apply specific stash, keep it
git stash drop stash@{2}              # Delete specific stash
git stash branch feature/new stash@{0}  # Create branch from stash

When to use it: You’re mid-feature and need to pull, review something, or hot-fix another branch. Stash your work, switch contexts, come back.


Rebase: The Clean History Tool

Rebase rewrites commit history by replaying your commits on top of another branch. Think of it as “pretend I started this work from a later point.”

# Update your feature branch with latest main
git fetch origin
git rebase origin/main

# If conflicts occur:
git rebase --continue    # After resolving
git rebase --abort       # Cancel and go back to pre-rebase state

# Interactive rebase: rewrite the last 3 commits
git rebase -i HEAD~3

Interactive rebase (-i) opens an editor with your commits listed:

pick a1b2c3 feat: add user model
pick d4e5f6 fix typo
pick g7h8i9 feat: add user validation

# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# s, squash = meld into previous commit
# f, fixup = meld into previous commit, discard log message
# d, drop = remove commit

Common uses:

  • squash multiple WIP commits before PR
  • reword to fix a commit message
  • drop to remove a commit entirely
  • fixup to silently fold a typo-fix into the previous commit

Cherry-Pick: Bring One Commit Anywhere

Cherry-pick applies a specific commit to a different branch — like cherry-picking a single changeset.

git log --oneline feature/auth        # Find the commit hash
# a1b2c3 feat: add password reset endpoint

git checkout main
git cherry-pick a1b2c3               # Apply that one commit here
git cherry-pick a1b2c3..g7h8i9       # Range of commits
git cherry-pick -n a1b2c3            # Stage but don't commit (--no-commit)

Use case: A hotfix was committed to a feature branch and you need it on main immediately without merging the whole feature branch.


Bisect: Binary Search Through History

git bisect finds the commit that introduced a bug using binary search. You mark commits as “good” or “bad” and Git narrows it down.

git bisect start
git bisect bad                        # Current commit is broken
git bisect good v1.2.0                # This tag worked fine

# Git checks out a middle commit. Test it, then:
git bisect good                       # This one works
# or
git bisect bad                        # This one doesn't

# Git keeps narrowing until it finds the culprit.
# When done:
git bisect reset                      # Return to original HEAD

You can also automate it with a test script:

git bisect run npm test               # Runs test suite at each commit

Branch Naming Convention

Our branches follow this pattern:

<type>/<ticket-or-short-description>

feat/user-auth
fix/login-redirect-loop
chore/update-dependencies
docs/api-endpoints
refactor/extract-auth-middleware
test/user-service-coverage

Types mirror conventional commit types (described below). The ticket ID goes first if there is one:

feat/GH-142-user-auth
fix/GH-89-login-redirect

Conventional Commits

Our commit format follows the Conventional Commits specification:

<type>(<scope>): <short description>

[optional body]

[optional footer(s)]

Types:

| Type | When to Use |
|---|---|
| feat | New feature |
| fix | Bug fix |
| chore | Maintenance, dependency updates |
| docs | Documentation only |
| refactor | Code change that neither fixes a bug nor adds a feature |
| test | Adding or fixing tests |
| style | Formatting, whitespace (no logic changes) |
| perf | Performance improvement |
| ci | CI/CD configuration |
| build | Build system changes |

Examples:

feat(auth): add JWT refresh token rotation

fix(api): handle null response from payment gateway

chore(deps): upgrade Prisma to 5.12.0

refactor(user-service): extract email validation to shared util

feat!: drop support for Node 18

BREAKING CHANGE: minimum Node version is now 20

The ! suffix and BREAKING CHANGE footer signal a breaking change — equivalent to a major version bump in semantic versioning.


The GitHub CLI (gh)

gh is the official GitHub CLI. Install it:

# macOS
brew install gh

# Authenticate
gh auth login

Pull Requests:

gh pr create                          # Interactive PR creation
gh pr create --title "feat: add auth" --body "Closes #42"
gh pr create --draft                  # Draft PR
gh pr list                            # PRs in current repo
gh pr view 123                        # View PR #123
gh pr checkout 123                    # Check out PR branch locally
gh pr review 123 --approve            # Approve
gh pr review 123 --request-changes --body "See inline comments"
gh pr merge 123 --squash              # Merge with squash
gh pr merge 123 --rebase              # Merge with rebase
gh pr close 123                       # Close without merging

Issues:

gh issue list                         # All open issues
gh issue view 42                      # View issue #42
gh issue create --title "Bug: login fails" --body "Steps: ..."
gh issue close 42
gh issue comment 42 --body "Fixed in #57"

Repositories:

gh repo clone org/repo                # Clone repo
gh repo view                          # View current repo in browser
gh repo fork                          # Fork current repo

Checks and Actions:

gh run list                           # Recent workflow runs
gh run view 12345678                  # View specific run
gh run watch                          # Watch current run live
gh workflow list                      # All workflows
gh workflow run deploy.yml            # Manually trigger workflow

Searching across PRs:

gh pr list --search "is:open assignee:@me"
gh pr list --search "label:needs-review"
gh issue list --search "milestone:v2.0"

Our Code Review Workflow

  1. Push your branch and open a PR:
git push -u origin HEAD
gh pr create --title "feat(auth): add password reset" --body "$(cat <<'EOF'
## Summary
- Adds /auth/reset-password endpoint
- Sends reset email via SendGrid
- Token expires in 1 hour

## Testing
- [ ] Tested happy path locally
- [ ] Tested expired token case
- [ ] Added unit tests for token generation

Closes #89
EOF
)"
  2. Request reviews:
gh pr edit --add-reviewer alice,bob
  3. Respond to review comments — push additional commits, then:
gh pr review --comment --body "All comments addressed in latest push"
  4. Merge strategy: we use squash merge for feature branches to keep main history clean. Hotfixes use rebase merge.
gh pr merge --squash --delete-branch

.gitignore for Node.js

Create a comprehensive .gitignore at the project root:

# Dependencies
node_modules/
.pnp
.pnp.js

# Build outputs
dist/
build/
out/
.next/
.nuxt/
.output/

# TypeScript
*.tsbuildinfo

# Environment files
.env
.env.local
.env.*.local

# Logs
npm-debug.log*
yarn-debug.log*
pnpm-debug.log*
*.log

# Editor
.vscode/
!.vscode/extensions.json
!.vscode/settings.json
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Testing
coverage/
.nyc_output/

# Turbo
.turbo/

# Misc
.cache/
*.local

PR Template

Create .github/pull_request_template.md in your repo:

## Summary
<!-- What does this PR do? Why? -->

## Changes
-

## Testing
- [ ] Unit tests pass (`npm test`)
- [ ] Type check passes (`npm run typecheck`)
- [ ] Lint passes (`npm run lint`)
- [ ] Tested in browser / local environment

## Screenshots
<!-- If UI changes, before/after screenshots -->

## Related Issues
Closes #

Trunk-Based Development vs GitFlow

GitFlow (what many Azure DevOps teams use):

  • main / develop / feature/* / release/* / hotfix/*
  • Long-lived develop branch
  • Formal release branches
  • Works well for scheduled releases with long QA cycles

Trunk-Based Development (what we use):

  • Only main is long-lived
  • Feature branches are short-lived (1-3 days max)
  • Feature flags control incomplete work in production
  • Merges to main trigger deployment

We use trunk-based development because it:

  • Eliminates merge hell from long-lived branches
  • Forces smaller, reviewable PRs
  • Keeps the deployment pipeline always green
  • Works naturally with feature flags
# Start work
git switch -c feat/GH-142-user-auth

# Commit often — multiple small commits while working
git commit -m "feat(auth): add user model"
git commit -m "feat(auth): add login endpoint"
git commit -m "test(auth): add login unit tests"

# Before PR: squash into meaningful commits
git rebase -i origin/main

# Push and PR
git push -u origin HEAD
gh pr create

GitHub Actions Basics

Actions are covered in detail in 6.2, but for Git-workflow context, a basic workflow that runs on every PR looks like this:

# .github/workflows/ci.yml
name: CI
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm test

The branch protection rules in GitHub settings can require this workflow to pass before a PR can be merged — equivalent to Azure DevOps branch policies requiring a passing build.
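
If you prefer to configure that protection from the terminal, gh api can call the branch protection REST endpoint directly. A minimal sketch; the owner/repo is a placeholder, and the required check context ("check") assumes the job ID from the workflow above:

# Hypothetical owner/repo; requires admin rights on the repository
gh api --method PUT repos/acme/my-repo/branches/main/protection \
  --input - << 'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["check"] },
  "enforce_admins": true,
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "restrictions": null
}
EOF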


Key Differences

Concept | Azure DevOps | GitHub
Default merge strategy | Merge commit | Configurable per repo (we use Squash)
PR checks | Build validation policies | Required status checks
Code owners | Code reviewer policies | CODEOWNERS file (example below)
Branch policies | Branch policies UI | Branch protection rules
Personal access tokens | PATs in ADO | PATs or GitHub Apps
CLI tooling | az devops / tf | gh
Notifications | ADO notification settings | GitHub notification settings
Wiki | Azure DevOps Wiki | GitHub Wiki or repo docs
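
The CODEOWNERS file referenced above is a plain-text file, usually at .github/CODEOWNERS, that maps paths to reviewers; GitHub automatically requests those reviewers on matching PRs. A minimal sketch with hypothetical teams and paths:

cat > .github/CODEOWNERS << 'EOF'
# Hypothetical ownership rules; adjust teams and paths to your org
/apps/api/   @acme/backend-team
/apps/web/   @acme/frontend-team
*.sql        @acme/dba
EOF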

Gotchas for .NET Engineers

Gotcha 1: git pull creates ugly merge commits by default. Out of the box, git pull does fetch + merge, creating a merge commit even when you’re just syncing with the remote. This clutters history. Configure pull.rebase true globally (shown above) or use git pull --rebase explicitly every time.
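
For reference, the global setting is a one-liner:

git config --global pull.rebase true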

Gotcha 2: Rebase rewrites history — never rebase shared branches. git rebase replaces commit hashes. If you rebase a branch that others have pulled, their history diverges and the result is a painful force-push situation. Only rebase branches that are yours alone. Never rebase main.

Gotcha 3: git commit --amend after push requires force push. If you amend a commit that’s already on the remote, you need git push --force-with-lease (not --force). --force-with-lease checks that no one else has pushed to the branch since you last fetched — safer than a blind force push. Never force push to main.
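
For example, after amending a commit that is already on the remote:

git commit --amend --no-edit
git push --force-with-lease origin HEAD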

Gotcha 4: Squash merging loses commit attribution in history. When you squash-merge a PR, all commits become one commit authored by the PR author. If you care about blame granularity per commit, use rebase merge. For feature work, squash is fine and keeps main history clean.

Gotcha 5: git add . includes files you don’t want. Unlike Visual Studio which shows you a diff before checkin, git add . stages everything including generated files, temp files, or secrets. Use git add -p for interactive staging, or be explicit with file paths. Always review git status before committing.

Gotcha 6: Conventional commits are not enforced by default. The format is a convention. Nobody stops you from committing "fixed stuff". Teams enforce it with a commitlint pre-commit hook. If your project uses one, git commit will fail on non-conforming messages — read the error before trying to bypass it.
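
If you are adding that enforcement yourself, the usual pairing is commitlint plus a husky commit-msg hook. A sketch, assuming a pnpm project and husky v9:

pnpm add -D @commitlint/cli @commitlint/config-conventional husky

# Use the conventional-commits ruleset
echo "module.exports = { extends: ['@commitlint/config-conventional'] };" > commitlint.config.js

# Set up husky, then add a commit-msg hook that runs commitlint on the message
pnpm exec husky init
echo 'pnpm exec commitlint --edit "$1"' > .husky/commit-msg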


Hands-On Exercise

Set up a local repo with our full workflow:

# 1. Create and initialize a repo
mkdir git-practice && cd git-practice
git init
git commit --allow-empty -m "chore: initial commit"

# 2. Create .gitignore
cat > .gitignore << 'EOF'
node_modules/
dist/
.env
EOF
git add .gitignore
git commit -m "chore: add gitignore"

# 3. Simulate feature work
git switch -c feat/add-greeting
echo 'export const greet = (name: string) => `Hello, ${name}`;' > greet.ts
git add greet.ts
git commit -m "feat: add greet function"

echo 'export const farewell = (name: string) => `Goodbye, ${name}`;' >> greet.ts
git add greet.ts
git commit -m "feat: add farewell function"

# 4. Interactive rebase to squash both into one
git rebase -i HEAD~2
# Change second 'pick' to 'fixup', save

# 5. Verify clean history
git log --oneline

# 6. Practice stash
echo "work in progress" >> greet.ts
git stash push -m "WIP: adding more functions"
git stash list
git stash pop

# 7. Practice bisect
git bisect start
git bisect good HEAD~1
git bisect bad HEAD
git bisect reset

If you have gh authenticated, push to GitHub and practice PR creation:

gh repo create git-practice --private --source=. --push
gh pr create --title "feat: add greeting functions" --body "Practice PR"
gh pr view --web

Quick Reference

# Daily workflow
git switch -c feat/my-feature        # New branch
git add -p                           # Stage interactively
git commit -m "feat(scope): message" # Conventional commit
git push -u origin HEAD              # Push + set upstream
gh pr create                         # Open PR

# Keeping branch current
git fetch origin
git rebase origin/main

# Clean up history before PR
git rebase -i HEAD~N                 # Squash N commits

# Stash
git stash push -m "description"
git stash pop

# Cherry-pick
git cherry-pick <hash>

# Find regression
git bisect start
git bisect bad
git bisect good <tag-or-hash>
git bisect reset

# GitHub CLI
gh pr list
gh pr checkout <number>
gh pr review <number> --approve
gh pr merge --squash --delete-branch
gh run list
gh issue list

Further Reading

GitHub Actions: From Azure Pipelines to Actions

For .NET engineers who know: Azure Pipelines YAML, build agents, variable groups, service connections, multi-stage pipelines
You’ll learn: How GitHub Actions maps to every Azure Pipelines concept and how to build a complete CI/CD workflow for a Node.js/TypeScript project
Time: 15-20 minutes


The .NET Way (What You Already Know)

A typical Azure Pipelines YAML for a .NET app:

# azure-pipelines.yml
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: production-secrets

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - task: UseDotNet@2
            inputs:
              version: '8.x'
          - script: dotnet restore
          - script: dotnet build --no-restore
          - task: DotNetCoreCLI@2
            inputs:
              command: 'test'

  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: DeployProd
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'MyServiceConnection'
                    appName: 'my-app'

GitHub Actions serves the same purpose with a different YAML dialect and different primitives — but the mental model translates almost 1:1.


The GitHub Actions Way

Concept Mapping

Azure Pipelines | GitHub Actions | Notes
Pipeline | Workflow | Defined in .github/workflows/*.yml
Stage | Job | Jobs can depend on other jobs
Job | Job | Same concept
Step | Step | Same concept
Task | Action | uses: actions/checkout@v4
Script step | run: step | Inline shell commands
Agent pool | Runner | runs-on: ubuntu-latest
Self-hosted agent | Self-hosted runner | Same concept, different setup
Variable group | Repository/Org secrets | Managed in GitHub Settings
Service connection | Secret (token/key) | No special object, just a secret
Pipeline variable | env: or vars: | vars for non-sensitive, secrets for sensitive
Trigger | on: | Push, PR, schedule, manual
Condition | if: | if: github.ref == 'refs/heads/main'
Template | Reusable workflow | .github/workflows/ called with workflow_call
Artifact | actions/upload-artifact | Upload/download between jobs
Environment (deployment) | environment: | Approval gates, protection rules
Build number | github.run_number | Auto-incrementing integer
Agent capabilities | Runner labels | runs-on: [self-hosted, linux, x64]

Workflow File Structure

A GitHub Actions workflow lives in .github/workflows/:

.github/
  workflows/
    ci.yml          # Runs on every PR and push to main
    cd.yml          # Deploys after CI passes on main
    scheduled.yml   # Nightly jobs, cron tasks
    release.yml     # Triggered by tag push

The basic structure:

name: CI                          # Displayed in GitHub UI

on:                               # Triggers (equivalent to "trigger:" in ADO)
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:                              # Workflow-level environment variables
  NODE_VERSION: '20'

jobs:
  build:                          # Job ID (used in "needs:" references)
    name: Build and Test          # Display name
    runs-on: ubuntu-latest        # Runner

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

Triggers

on:
  # Push to specific branches
  push:
    branches: [main, 'release/**']
    paths:
      - 'src/**'
      - 'package*.json'
    paths-ignore:
      - '**.md'

  # PRs targeting specific branches
  pull_request:
    branches: [main]
    types: [opened, synchronize, reopened]

  # Scheduled (cron syntax)
  schedule:
    - cron: '0 2 * * 1'          # 2am every Monday UTC

  # Manual trigger with inputs
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: true
        default: 'staging'
        type: choice
        options: [staging, production]

  # Called by another workflow
  workflow_call:
    inputs:
      node-version:
        type: string
        default: '20'
    secrets:
      DATABASE_URL:
        required: true

Job Dependencies

Jobs run in parallel by default. Use needs: to create dependencies — equivalent to Azure Pipelines stage dependsOn:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint

  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run typecheck

  test:
    runs-on: ubuntu-latest
    needs: [lint, typecheck]          # Waits for both to pass
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test

  build:
    runs-on: ubuntu-latest
    needs: test                       # Waits for test to pass
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build

  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'   # Only on main branch
    environment: production               # Requires environment approval
    steps:
      - name: Deploy
        run: echo "Deploying..."

Secrets and Variables

Non-sensitive config — use repository Variables (Settings > Secrets and Variables > Variables):

env:
  APP_URL: ${{ vars.APP_URL }}
  NODE_ENV: production

Sensitive values — use Secrets (Settings > Secrets and Variables > Secrets):

steps:
  - name: Deploy to Render
    env:
      RENDER_API_KEY: ${{ secrets.RENDER_API_KEY }}
    run: |
      curl -X POST \
        -H "Authorization: Bearer $RENDER_API_KEY" \
        https://api.render.com/v1/services/srv-xxx/deploys

Organization-level secrets are available to all repos in an org — equivalent to a shared ADO variable group. Set them in Organization Settings.
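
Secrets can also be managed from the terminal with gh secret; the names and values here are placeholders:

# Repository-level secret (current repo)
gh secret set RENDER_API_KEY --body "rnd_xxx"

# Organization-level secret, visible to all repositories
gh secret set SHARED_NPM_TOKEN --org acme --visibility all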

Environment secrets are scoped to a specific deployment environment:

jobs:
  deploy:
    environment: production           # Unlocks environment-level secrets
    steps:
      - name: Migrate DB
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}   # From "production" env
        run: npm run db:migrate

Caching node_modules

This is one of the most important optimizations. Without caching, npm ci runs a full install every time — slow and costly:

steps:
  - uses: actions/checkout@v4

  - uses: actions/setup-node@v4
    with:
      node-version: '20'
      cache: 'npm'                    # Built-in cache for npm lockfile

  - run: npm ci                       # Uses cache if lockfile unchanged

For pnpm (which we prefer):

steps:
  - uses: actions/checkout@v4

  - uses: pnpm/action-setup@v4
    with:
      version: 9

  - uses: actions/setup-node@v4
    with:
      node-version: '20'
      cache: 'pnpm'

  - run: pnpm install --frozen-lockfile

Manual cache control for monorepos or unusual setups:

- uses: actions/cache@v4
  with:
    path: |
      ~/.pnpm-store
      node_modules
      packages/*/node_modules
    key: ${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      ${{ runner.os }}-pnpm-

Matrix Builds

Run the same job across multiple Node versions or operating systems — equivalent to multi-configuration builds in Azure Pipelines:

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: ['18', '20', '22']
      fail-fast: false                # Don't cancel all if one fails
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test

Matrix with exclusions:

strategy:
  matrix:
    os: [ubuntu-latest, windows-latest]
    node: ['18', '20']
    exclude:
      - os: windows-latest
        node: '18'                    # Skip this combination

Artifacts: Passing Build Output Between Jobs

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build

      - name: Upload build artifact
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
          retention-days: 7

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/

      - name: Deploy
        run: ./scripts/deploy.sh

Reusable Workflows

Equivalent to Azure Pipelines templates. Extract shared logic into a callable workflow:

# .github/workflows/_shared-test.yml
name: Shared Test Job
on:
  workflow_call:
    inputs:
      node-version:
        type: string
        default: '20'
    secrets:
      DATABASE_URL:
        required: true

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ inputs.node-version }}
      - run: npm ci
      - run: npm test

Call it from another workflow:

# .github/workflows/ci.yml
jobs:
  test:
    uses: ./.github/workflows/_shared-test.yml
    with:
      node-version: '20'
    secrets:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}

Self-Hosted Runners

When you need specific hardware, network access, or to avoid GitHub’s runner costs:

jobs:
  deploy:
    runs-on: [self-hosted, linux, production]   # Match runner labels
    steps:
      - run: echo "Running on our own machine"

Register a runner in GitHub: Settings > Actions > Runners > New self-hosted runner. Follow the setup script. The runner runs as a service on any Linux/macOS/Windows machine.
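
The registration itself is a short sequence on the target machine. A sketch for Linux; the download URL, version, and registration token all come from that setup page:

# Placeholders: <VERSION> and <REGISTRATION_TOKEN> are shown on the "New self-hosted runner" page
mkdir actions-runner && cd actions-runner
curl -o runner.tar.gz -L "https://github.com/actions/runner/releases/download/v<VERSION>/actions-runner-linux-x64-<VERSION>.tar.gz"
tar xzf runner.tar.gz

./config.sh --url https://github.com/acme/my-repo --token <REGISTRATION_TOKEN> --labels linux,production

# Run in the foreground, or install as a service
./run.sh
sudo ./svc.sh install && sudo ./svc.sh start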

When to use self-hosted:

  • Need access to private network resources (databases, internal services)
  • Need specific hardware (GPU, high memory)
  • High volume and GitHub’s per-minute costs add up
  • Compliance requires builds not leaving your infrastructure

Complete CI/CD Workflow

Here is a production-ready workflow for a Node.js/TypeScript project:

# .github/workflows/ci.yml
name: CI

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true           # Cancel previous run if new push arrives

env:
  NODE_VERSION: '20'

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - run: pnpm run lint

  typecheck:
    name: Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - run: pnpm run typecheck

  test:
    name: Test
    runs-on: ubuntu-latest
    needs: [lint, typecheck]
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    env:
      DATABASE_URL: postgresql://test:test@localhost:5432/testdb

    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - run: pnpm run db:migrate
      - run: pnpm test --coverage

      - name: Upload coverage
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: coverage
          path: coverage/

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'pnpm'
      - run: pnpm install --frozen-lockfile
      - run: pnpm run build

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/
          retention-days: 3

  deploy:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment:
      name: production
      url: https://myapp.example.com

    steps:
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/

      - name: Deploy to Render
        env:
          RENDER_DEPLOY_HOOK: ${{ secrets.RENDER_DEPLOY_HOOK }}
        run: |
          curl -X POST "$RENDER_DEPLOY_HOOK"

Expressions and Contexts

GitHub Actions has a template expression language for conditionals and dynamic values:

# Context variables
${{ github.sha }}              # Commit SHA
${{ github.ref }}              # refs/heads/main
${{ github.actor }}            # Username who triggered
${{ github.event_name }}       # push, pull_request, etc.
${{ runner.os }}               # Linux, Windows, macOS
${{ job.status }}              # success, failure, cancelled

# Functions
${{ hashFiles('**/package-lock.json') }}
${{ contains(github.ref, 'main') }}
${{ startsWith(github.ref, 'refs/tags/') }}
${{ format('Hello {0}', github.actor) }}

# Conditionals on steps
if: github.ref == 'refs/heads/main'
if: failure()                          # Run only if previous step failed
if: always()                           # Always run (like finally)
if: success() && github.event_name == 'push'
if: contains(github.event.pull_request.labels.*.name, 'deploy-preview')

Key Differences

Behavior | Azure Pipelines | GitHub Actions
File location | Root: azure-pipelines.yml | .github/workflows/*.yml
Parallel jobs | Stages sequential, jobs parallel | All jobs parallel unless needs:
Secret masking | Automatic | Automatic
Approval gates | Environment checks | Environment protection rules
Built-in tasks | 400+ Azure tasks | Community marketplace + built-ins
YAML anchors | Supported | Not supported — use reusable workflows
Skip CI | [skip ci] in commit message | [skip ci] or [no ci] in commit
Max job timeout | 360 minutes | 360 minutes (6 hours)
Concurrency control | No built-in | concurrency: key
Docker service containers | services: block | services: block (same syntax)

Gotchas for .NET Engineers

Gotcha 1: Jobs start fresh with no shared filesystem. Each job runs in a completely isolated environment. Files written in one job are not available in another unless you use actions/upload-artifact and actions/download-artifact. This trips up .NET engineers who expect the workspace to persist across stages like it does in Azure Pipelines agents that reuse workspaces.

Gotcha 2: Secrets are not available in pull requests from forks. For security reasons, GitHub does not expose secrets.* to workflows triggered by PRs from forked repositories. If you see empty secret values in a fork-triggered run, this is why. Use the pull_request_target event with extreme caution — it runs with repo access — or design your CI to not need secrets for basic checks.

Gotcha 3: concurrency: is critical for deployments. Without concurrency: limits, pushing twice quickly can result in two concurrent deployments. The second deploy might finish before the first, leaving an old build in production. Always set concurrency: on deploy jobs with cancel-in-progress: false (cancel CI runs, but never cancel a deploy mid-flight).

concurrency:
  group: deploy-production
  cancel-in-progress: false     # Never interrupt a running deploy

Gotcha 4: Service containers need health checks or they’ll fail silently. When running a Postgres service container for tests, the container starts before Postgres is ready to accept connections. Without the options: --health-cmd pg_isready block, your migration step will fail with a connection error. Always add health check options to service containers.

Gotcha 5: actions/cache hits don’t guarantee fresh content. Cache keys are based on a hash (e.g., hashFiles('**/pnpm-lock.yaml')). If the lockfile hasn’t changed, the cache is used — which means if a package was published with a bug and you need to force a clean install, you must either change the lockfile or bust the cache by changing the cache key prefix.

Gotcha 6: GitHub-hosted runners are ephemeral — no tool persistence. Azure DevOps agents can accumulate tools between runs if you manage the agent pool. GitHub-hosted runners are torn down after every job. Every tool installation must be in the workflow. This is actually better for reproducibility, but it means you cannot rely on anything pre-installed beyond what’s documented in the runner image manifest.


Hands-On Exercise

Create a working CI workflow for a minimal TypeScript project:

mkdir actions-practice && cd actions-practice
git init
mkdir -p .github/workflows src

# Create a minimal TypeScript setup
cat > package.json << 'EOF'
{
  "name": "actions-practice",
  "version": "1.0.0",
  "scripts": {
    "build": "tsc",
    "typecheck": "tsc --noEmit",
    "test": "node --test",
    "lint": "echo 'lint passed'"
  },
  "devDependencies": {
    "typescript": "^5.0.0"
  }
}
EOF

cat > tsconfig.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src/**/*"]
}
EOF

cat > src/greet.ts << 'EOF'
export const greet = (name: string): string => `Hello, ${name}`;
EOF

cat > src/greet.test.ts << 'EOF'
import { strict as assert } from 'node:assert';
import { test } from 'node:test';
import { greet } from './greet.js';

test('greet returns greeting', () => {
  assert.equal(greet('World'), 'Hello, World');
});
EOF

# Generate a lockfile (both npm ci and setup-node's npm cache require package-lock.json)
npm install

# Keep installed and built artifacts out of the repo
cat > .gitignore << 'EOF'
node_modules/
dist/
EOF

Now create the workflow:

cat > .github/workflows/ci.yml << 'EOF'
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm run build
      - run: npm test
EOF

git add .
git commit -m "ci: add GitHub Actions workflow"

Push to GitHub and watch the Actions tab:

gh repo create actions-practice --private --source=. --push
gh run watch    # Watch the workflow in real time

Quick Reference

# Trigger patterns
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 9 * * 1-5'
  workflow_dispatch:

# Job with dependencies
jobs:
  my-job:
    runs-on: ubuntu-latest
    needs: [other-job]
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: echo "hello"

# Secrets and variables
env:
  MY_SECRET: ${{ secrets.MY_SECRET }}
  MY_VAR: ${{ vars.MY_VAR }}

# Cache pnpm
- uses: pnpm/action-setup@v4
  with:
    version: 9
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'pnpm'
- run: pnpm install --frozen-lockfile

# Artifacts
- uses: actions/upload-artifact@v4
  with:
    name: dist
    path: dist/
- uses: actions/download-artifact@v4
  with:
    name: dist
    path: dist/

# Postgres service container
services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_PASSWORD: test
    ports:
      - 5432:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-retries 5

# Cancel in-progress
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

Further Reading

Render: From Azure App Service to Render

For .NET engineers who know: Azure App Service, Azure Static Web Apps, Azure Database for PostgreSQL, Azure Cache for Redis, deployment slots
You’ll learn: How Render maps to Azure hosting services, where it simplifies your workflow, and how to deploy a Next.js frontend and NestJS API to production
Time: 15-20 minutes


The .NET Way (What You Already Know)

Hosting a .NET web app on Azure involves:

  • Azure App Service — managed PaaS for web apps and APIs
  • Azure Static Web Apps — CDN-backed hosting for SPAs
  • Azure Database for PostgreSQL — managed Postgres
  • Azure Cache for Redis — managed Redis
  • Deployment slots — staging environments with swap capability
  • Azure Key Vault — secrets management
  • Application Insights — monitoring and telemetry
  • Virtual Networks — private network isolation
  • App Service Plans — the billing unit (B1, P1v3, etc.)

The Azure experience is powerful and enterprise-grade. It is also complex: you navigate resource groups, service plans, connection strings, managed identities, private endpoints, and a UI that changes frequently.

Render trades some of that power for radical simplicity. For most JS/TS projects, you’ll ship faster on Render and spend less time on infrastructure.


The Render Way

Concept Mapping

Azure Service | Render Equivalent | Notes
App Service (Linux) | Web Service | Auto-deploys from GitHub
Static Web Apps | Static Site | CDN-backed, free tier available
Azure Database for PostgreSQL | PostgreSQL | Managed, automatic backups
Azure Cache for Redis | Redis | Managed Redis instance
WebJobs / Azure Functions (timer) | Cron Jobs | Scheduled commands
Deployment Slots | Preview Environments | Auto-created per branch/PR
App Service Plan | Instance Type (Starter/Standard/Pro) | Vertical scaling options
App Settings | Environment Variables | Per-service, in dashboard or YAML
Key Vault | Secret Files + Env Vars | Render encrypts env vars at rest
Application Insights | Render Metrics + Logs | Basic metrics, log streaming
Custom Domain + TLS | Custom Domains | Free TLS via Let’s Encrypt
Health checks | Health Check Path | Configures restart on failure
Deployment Center | Auto-deploy from GitHub | Set repo + branch, done
ARM Templates / Bicep | render.yaml (Infrastructure as Code) | Optional but recommended

Render Service Types

Web Service — for APIs, full-stack apps, anything with a server process:

  • Runs any Docker image or auto-detects Node.js, Python, Ruby, Go, Rust
  • Persistent process with a public HTTPS URL
  • Auto-restarts on crash
  • Zero-downtime deploys (waits for new instance to be healthy)

Static Site — for SPAs, Next.js static export, documentation:

  • Serves files from a build output directory
  • Global CDN
  • Free tier available
  • Supports custom redirects and rewrites

Background Worker — like a Web Service but with no public port:

  • For queue consumers, scheduled background jobs, etc.

Cron Job — runs a command on a schedule:

  • Cron expression syntax
  • Runs in an isolated container
  • Logs available in dashboard

PostgreSQL — managed PostgreSQL 14/15/16:

  • Automatic daily backups
  • Connection pooling built-in
  • Available from other Render services via internal URL (no public exposure needed)

Redis — managed Redis:

  • Available via internal URL within your Render account
  • Persistence options

Deploying a Web Service

The simplest path: connect your GitHub repo in the Render dashboard, set a build command and start command, and Render handles the rest.

For a NestJS API:

Build Command: npm ci && npm run build
Start Command: node dist/main.js

For a Next.js app (server-side rendering):

Build Command: npm ci && npm run build
Start Command: npm start

Environment Variables: Set them in the Render dashboard under your service’s “Environment” tab, or define them in render.yaml (see below).


render.yaml — Infrastructure as Code

Define your entire stack in a single file at the repo root:

# render.yaml
services:
  # NestJS API
  - type: web
    name: my-api
    runtime: node
    region: oregon
    plan: starter
    buildCommand: pnpm install --frozen-lockfile && pnpm run build
    startCommand: node dist/main.js
    healthCheckPath: /health
    envVars:
      - key: NODE_ENV
        value: production
      - key: PORT
        value: 10000
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres
          property: connectionString
      - key: REDIS_URL
        fromService:
          name: my-redis
          type: redis
          property: connectionString
      - key: JWT_SECRET
        generateValue: true           # Render generates a random secret
      - key: SENDGRID_API_KEY
        sync: false                   # Tells Render: prompt for this in dashboard

  # Next.js frontend
  - type: web
    name: my-frontend
    runtime: node
    region: oregon
    plan: starter
    buildCommand: pnpm install --frozen-lockfile && pnpm run build
    startCommand: pnpm start
    healthCheckPath: /
    envVars:
      - key: NEXT_PUBLIC_API_URL
        value: https://my-api.onrender.com

  # Background worker
  - type: worker
    name: my-queue-worker
    runtime: node
    buildCommand: pnpm install --frozen-lockfile && pnpm run build
    startCommand: node dist/workers/queue.js
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres
          property: connectionString

  # Nightly cleanup job
  - type: cron
    name: db-cleanup
    runtime: node
    schedule: "0 3 * * *"            # 3am UTC every day
    buildCommand: pnpm install --frozen-lockfile && pnpm run build
    startCommand: node dist/scripts/cleanup.js
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres
          property: connectionString

databases:
  - name: my-postgres
    databaseName: myapp
    user: myapp
    plan: starter
    region: oregon

  - name: my-redis
    plan: starter
    region: oregon

Apply this with:

# Install Render CLI
npm install -g @render-oss/render-cli

# Apply (creates or updates services)
render deploy --yes

Or push render.yaml to your repo and Render auto-detects it on first connection.


Auto-Deploy from GitHub

  1. Connect your GitHub account in Render dashboard
  2. Create a new Web Service
  3. Select your repo and branch (main)
  4. Set build and start commands
  5. Every push to main triggers a new deploy automatically

How it works:

  • Render clones your repo at the pushed commit
  • Runs the build command in an isolated build environment
  • If build succeeds, starts the new instance
  • Health check passes → traffic switches to new instance (zero-downtime)
  • If health check fails → old instance keeps running, deploy marked as failed

To disable auto-deploy (manual deploys only):

services:
  - type: web
    name: my-api
    autoDeploy: false

Or trigger deploys via the Render API from your GitHub Actions workflow (covered in 6.2):

curl -X POST \
  -H "Authorization: Bearer $RENDER_API_KEY" \
  "https://api.render.com/v1/services/$SERVICE_ID/deploys" \
  -H "Content-Type: application/json" \
  -d '{}'

Preview Environments

Render’s Preview Environments create a complete copy of your stack for every pull request — equivalent to deployment slots but automatic and branch-scoped.

Configure in render.yaml:

previewsEnabled: true
previewPlan: starter

services:
  - type: web
    name: my-api
    previewsEnabled: true
    # Preview URL will be: my-api-pr-42.onrender.com

When a PR is opened, Render:

  1. Creates a new service instance with the PR branch code
  2. Provisions a temporary database (if configured)
  3. Posts the preview URL to the GitHub PR as a comment
  4. Tears down everything when the PR is merged or closed

This replaces the Azure App Service deployment slots pattern but requires zero manual configuration per PR.


Health Checks

Configure a health check endpoint so Render knows when your service is actually ready:

// NestJS: src/health/health.controller.ts
import { Controller, Get } from '@nestjs/common';

@Controller('health')
export class HealthController {
  @Get()
  check() {
    return { status: 'ok', timestamp: new Date().toISOString() };
  }
}

In render.yaml:

services:
  - type: web
    name: my-api
    healthCheckPath: /health

Render polls this endpoint after deploy. If it returns a non-2xx status within the timeout window, the deploy is rolled back.


Custom Domains and TLS

  1. Add your domain in Render dashboard → Service → Settings → Custom Domains
  2. Add the DNS records shown (CNAME or A record)
  3. TLS certificate is provisioned automatically via Let’s Encrypt
  4. Certificate auto-renews

For apex domains (example.com vs www.example.com), Render supports A records pointing to their load balancer IP. Use www with a CNAME when possible for better reliability.


Scaling

Render’s scaling model is simpler than Azure’s:

Vertical scaling — change instance type:

Plan | RAM | CPU | Use Case
Starter | 512 MB | 0.5 CPU | Dev, low traffic
Standard | 2 GB | 1 CPU | Most production apps
Pro | 4 GB | 2 CPU | High traffic APIs
Pro Plus | 8 GB | 4 CPU | Memory-intensive workloads

Horizontal scaling — multiple instances (Standard plan and above):

services:
  - type: web
    name: my-api
    plan: standard
    scaling:
      minInstances: 2
      maxInstances: 10
      targetMemoryPercent: 80       # Scale out when memory > 80%
      targetCPUPercent: 75          # Scale out when CPU > 75%

Note: horizontal scaling requires sticky sessions or stateless design. Your app should not store state in-process (use Redis for session storage).


Monitoring and Logs

Logs:

# With the Render CLI installed (see above)
render logs my-api --tail             # Tail live logs
render logs my-api --since 1h         # Last hour

Or view in the dashboard under your service’s “Logs” tab. Logs stream in real time.

Metrics:

The dashboard shows CPU, memory, and request throughput graphs. For detailed APM, integrate a third-party:

// src/main.ts — Sentry integration
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
});

For structured logging that Render captures:

import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL ?? 'info',
  // Render captures stdout — just log JSON and it's searchable
});

Render vs Azure: Honest Comparison

Where Render wins:

Capability | Render Advantage
Setup time | Minutes vs hours
Pricing | Predictable flat rates, no egress fees
Preview environments | Automatic, zero config
Developer experience | Simple dashboard, clear logs
Free tier | Static sites and one web service free
Managed TLS | Automatic, no cert management

Where Azure wins:

Capability | Azure Advantage
Virtual Networks | Private network isolation (Render services are publicly reachable)
Multi-region | Render is single-region per service
Enterprise compliance | SOC 2 Type II, HIPAA, FedRAMP
Scale | Render tops out at ~20 instances; Azure scales to hundreds
Advanced networking | VNet integration, private endpoints, WAF
Existing Azure ecosystem | If you have Azure AD, CosmosDB, Service Bus

For most product teams with fewer than 100k daily active users, Render is the better choice. The operational simplicity compounds over time — fewer incidents, faster iteration, lower DevOps cost.


Full Deployment Example: Next.js + NestJS

Project structure:

my-project/
  apps/
    web/          # Next.js
    api/          # NestJS
  render.yaml

render.yaml:

services:
  - type: web
    name: my-project-api
    runtime: node
    region: oregon
    plan: starter
    rootDir: apps/api
    buildCommand: npm ci && npm run build
    startCommand: node dist/main.js
    healthCheckPath: /health
    previewsEnabled: true
    envVars:
      - key: NODE_ENV
        value: production
      - key: PORT
        value: 10000
      - key: DATABASE_URL
        fromDatabase:
          name: my-project-db
          property: connectionString
      - key: JWT_SECRET
        generateValue: true
      - key: CORS_ORIGIN
        value: https://my-project-web.onrender.com

  - type: web
    name: my-project-web
    runtime: node
    region: oregon
    plan: starter
    rootDir: apps/web
    buildCommand: npm ci && npm run build
    startCommand: npm start
    healthCheckPath: /
    previewsEnabled: true
    envVars:
      - key: NEXT_PUBLIC_API_URL
        value: https://my-project-api.onrender.com

databases:
  - name: my-project-db
    databaseName: myproject
    plan: starter
    region: oregon

NestJS health endpoint:

// apps/api/src/app.module.ts
import { Module } from '@nestjs/common';
import { TerminusModule } from '@nestjs/terminus';
import { HealthController } from './health/health.controller';

@Module({
  imports: [TerminusModule],
  controllers: [HealthController],
})
export class AppModule {}
// apps/api/src/health/health.controller.ts
import { Controller, Get } from '@nestjs/common';
import { HealthCheck, HealthCheckService, PrismaHealthIndicator } from '@nestjs/terminus';

@Controller('health')
export class HealthController {
  constructor(
    private health: HealthCheckService,
    private db: PrismaHealthIndicator,
  ) {}

  @Get()
  @HealthCheck()
  check() {
    return this.health.check([
      () => this.db.pingCheck('database'),
    ]);
  }
}

Next.js handles the PORT variable automatically, but ensure next.config.js doesn’t hardcode ports:

// apps/web/next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        destination: `${process.env.NEXT_PUBLIC_API_URL}/:path*`,
      },
    ];
  },
};

module.exports = nextConfig;

Deploy:

# Push render.yaml to your repo
git add render.yaml
git commit -m "chore(infra): add render.yaml"
git push origin main

# Render auto-deploys on push to main
# Check status:
render deploy --wait

Gotchas for .NET Engineers

Gotcha 1: Free and Starter tier services spin down after 15 minutes of inactivity. On the free tier and Starter plan, web services sleep when idle. The first request after sleep incurs a cold start of 30-60 seconds. This is normal for dev/staging. For production, use Standard plan or higher, which keeps instances always running.

Gotcha 2: Render uses ephemeral disk — nothing written to disk persists across deploys. Unlike App Service where you can write files to wwwroot and they persist, Render instances have a read-only filesystem (except /tmp). Store files in S3/R2 object storage, not on disk. Database migrations must be idempotent because they may run against an existing database.

Gotcha 3: Internal URLs are not the same as public URLs. Render services on the same account communicate via internal URLs (e.g., http://my-api:10000), not public HTTPS URLs. Internal communication is free and faster. If your frontend calls the API using the public URL internally, you’re adding unnecessary latency and egress. Set NEXT_PUBLIC_API_URL to the public URL (for browser clients) and a separate server-side API_URL to the internal URL.

Gotcha 4: PostgreSQL connection pool exhaustion. Render’s managed Postgres has a default max connection limit (25 for Starter). Node.js apps often open more connections than .NET apps because of async concurrency. Use a connection pooler. Prisma recommends connection limit settings:

DATABASE_URL=postgresql://user:pass@host:5432/db?connection_limit=5&pool_timeout=10

Or use pgbouncer (Render offers this as an add-on) for high-concurrency workloads.

Gotcha 5: render.yaml applies on the first deploy but not all subsequent changes automatically. Not all render.yaml fields are live-updated on every push. Plan changes, instance count, and some infrastructure changes require applying through the Render dashboard or CLI. Environment variables defined in render.yaml with sync: false must be set manually in the dashboard — they are placeholders, not values.

Gotcha 6: Preview environments share the main database by default. Unless you explicitly configure a separate database for previews, preview environment services will use the same DATABASE_URL as your main service — which means preview PRs can mutate production data. Always scope preview environments to use a separate test database or disable the database in preview configuration.


Hands-On Exercise

Deploy a minimal NestJS API to Render:

# Create a minimal NestJS app
npm install -g @nestjs/cli
nest new render-demo
cd render-demo

# Add a health endpoint
nest generate controller health

# Edit src/health/health.controller.ts
cat > src/health/health.controller.ts << 'EOF'
import { Controller, Get } from '@nestjs/common';

@Controller('health')
export class HealthController {
  @Get()
  check() {
    return {
      status: 'ok',
      timestamp: new Date().toISOString(),
      node: process.version,
    };
  }
}
EOF

# Create render.yaml
cat > render.yaml << 'EOF'
services:
  - type: web
    name: render-demo-api
    runtime: node
    plan: starter
    buildCommand: npm ci && npm run build
    startCommand: node dist/main.js
    healthCheckPath: /health
    envVars:
      - key: NODE_ENV
        value: production
      - key: PORT
        value: 10000
EOF

# Push to GitHub
git add .
git commit -m "feat: add NestJS API with render.yaml"
gh repo create render-demo --private --source=. --push

# Now go to render.com, connect the repo, and deploy
# Or use the CLI:
render deploy --yes

Visit your service URL + /health to confirm the response.


Quick Reference

# render.yaml service types
- type: web           # Web service (HTTP)
- type: worker        # Background worker (no HTTP)
- type: cron          # Scheduled job

# Common env var patterns
envVars:
  - key: NODE_ENV
    value: production
  - key: DATABASE_URL
    fromDatabase:
      name: my-db
      property: connectionString
  - key: SECRET
    generateValue: true
  - key: MANUAL_SECRET
    sync: false

# Plans
plan: starter         # 512MB RAM, 0.5 CPU (~$7/mo)
plan: standard        # 2GB RAM, 1 CPU (~$25/mo)
plan: pro             # 4GB RAM, 2 CPU (~$85/mo)

# Health check
healthCheckPath: /health

# Preview environments
previewsEnabled: true

# Scaling (Standard+ only)
scaling:
  minInstances: 1
  maxInstances: 5
  targetCPUPercent: 75
# Render CLI
render deploy --yes             # Apply render.yaml
render deploy --wait            # Wait for deploy to complete
render logs <service-name> --tail
render services list
render env list <service-name>
render env set <service-name> KEY=value

Further Reading

Docker for Node.js: From .NET Images to Node.js Images

For .NET engineers who know: Docker for .NET (multi-stage builds, mcr.microsoft.com/dotnet images, Docker Compose)
You’ll learn: How Node.js Dockerfiles differ from .NET Dockerfiles, how to optimize image size and layer caching, and how to set up a production-ready local dev environment with Docker Compose
Time: 15-20 minutes


The .NET Way (What You Already Know)

A typical .NET multi-stage Dockerfile:

# Build stage
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /out

# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
COPY --from=build /out .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]

The pattern is:

  1. Build in the SDK image (large)
  2. Copy compiled output to a runtime image (small)
  3. The compiled output is self-contained — no source, no build tools in production

Node.js follows the same multi-stage principle but with key differences: the build step compiles TypeScript, and the production stage needs node_modules (runtime dependencies), not just compiled output.


The Node.js Way

Image Choices

For .NET, the choice is simple: SDK for build, aspnet for runtime. Node.js has more options:

Base Image | Size (approx.) | Use Case
node:20 | ~1.1 GB | Full Debian — development, debugging
node:20-slim | ~230 MB | Debian with minimal packages — good default
node:20-alpine | ~60 MB | Alpine Linux — smallest, but quirks exist
cgr.dev/chainguard/node | ~50 MB | Distroless — minimal attack surface, no shell

Alpine is popular for size, but it uses musl libc instead of glibc. Most npm packages work fine, but native addons (compiled C++ bindings) often need rebuilding or fail entirely. If you use packages like sharp, bcrypt, or anything with native code, test Alpine thoroughly before committing to it.
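
A quick way to smoke-test a dependency on Alpine before adopting it, assuming your project’s package.json and lockfile list the native package in question (sharp is used here as an example):

# Install and load the native addon inside the Alpine image
docker run --rm -v "$PWD":/app -w /app node:20-alpine \
  sh -c "npm ci && node -e \"require('sharp'); console.log('native addon loaded')\""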

Slim is the pragmatic default: glibc compatibility, small enough, Debian package ecosystem available if needed.

Distroless (like Chainguard or gcr.io/distroless/nodejs20-debian12) has no shell, no package manager, no utilities — only Node.js. This is the gold standard for security: an attacker who gets code execution in the container has no tools to pivot with. The tradeoff is that docker exec debugging doesn’t work.


Multi-Stage Dockerfile for TypeScript

The key insight: Node.js needs node_modules at runtime (for require() to work). You can’t just copy compiled .js files like you copy a .NET DLL. The trick is installing only production dependencies in the final stage:

# ============================================================
# Stage 1: Install ALL dependencies (including devDependencies)
# ============================================================
FROM node:20-slim AS deps
WORKDIR /app

# Copy package files first — this layer is cached unless they change
COPY package.json package-lock.json ./

# ci = clean install, respects lockfile
RUN npm ci

# ============================================================
# Stage 2: Build (TypeScript -> JavaScript)
# ============================================================
FROM node:20-slim AS build
WORKDIR /app

# Copy node_modules from deps stage
COPY --from=deps /app/node_modules ./node_modules

# Copy source
COPY . .

# Compile TypeScript
RUN npm run build

# ============================================================
# Stage 3: Production runtime
# ============================================================
FROM node:20-slim AS production
WORKDIR /app

# Set production environment
ENV NODE_ENV=production

# Create non-root user (see Gotchas)
RUN groupadd --gid 1001 nodejs && \
    useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs

# Copy package files
COPY package.json package-lock.json ./

# Install ONLY production dependencies
RUN npm ci --omit=dev && npm cache clean --force

# Copy compiled output from build stage
COPY --from=build /app/dist ./dist

# Copy any other runtime assets (migrations, templates, etc.)
# COPY --from=build /app/prisma ./prisma

# Switch to non-root user
USER nodejs

# Expose the port your app listens on
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

# Start the app
CMD ["node", "dist/main.js"]

pnpm Variant

If your project uses pnpm (which we prefer in monorepos):

FROM node:20-slim AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable

FROM base AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile

FROM base AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN pnpm run build

FROM base AS production
WORKDIR /app
ENV NODE_ENV=production

RUN groupadd --gid 1001 nodejs && \
    useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs

COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile

COPY --from=build /app/dist ./dist
USER nodejs
EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

CMD ["node", "dist/main.js"]

The --mount=type=cache flag (BuildKit feature) caches the pnpm store between builds without copying it into the image — equivalent to Docker layer caching but more granular.


.dockerignore

Like .gitignore but for Docker’s build context. Without it, COPY . . sends everything — including node_modules (often 500MB+) — to the Docker daemon:

# Dependencies (always installed fresh in the image)
node_modules/
**/node_modules/

# Build outputs
dist/
build/
.next/
.turbo/

# Environment files
.env
.env.*
!.env.example

# Editor
.vscode/
.idea/
*.swp

# Git
.git/
.gitignore

# Test files (not needed in production image)
**/*.test.ts
**/*.spec.ts
**/__tests__/
coverage/

# Docker
Dockerfile
docker-compose*.yml
.dockerignore

# CI
.github/

# Docs
*.md
docs/

# OS
.DS_Store
Thumbs.db

Without .dockerignore, a project with a 300MB node_modules sends all of that to the build context on every docker build. The build still works but it’s slow and wastes network IO.


Layer Caching for node_modules

The most important optimization in a Node.js Dockerfile is layer ordering. Docker caches layers — if a layer’s input hasn’t changed, it uses the cached result.

Wrong (busts cache on any file change):

COPY . .                      # Copies everything — any code change busts this layer
RUN npm ci                    # Reinstalls everything every time

Right (cache only busts when lockfile changes):

COPY package.json package-lock.json ./   # Only these two files
RUN npm ci                               # Cached unless lockfile changes
COPY . .                                 # Code changes don't affect npm ci cache
RUN npm run build

This is the same principle as the .NET pattern of copying .csproj first and running dotnet restore before copying source. The package manifest changes rarely; source code changes constantly.


Image Size Comparison

Build a Node.js API and compare:

# Assumes the Dockerfile parameterizes its base image: ARG BASE plus FROM ${BASE} in each stage
# node:20 (full Debian)
docker build --target production -t api:full \
  --build-arg BASE=node:20 .
docker image inspect api:full --format='{{.Size}}' | numfmt --to=iec

# node:20-slim
docker build --target production -t api:slim \
  --build-arg BASE=node:20-slim .

# node:20-alpine
docker build --target production -t api:alpine \
  --build-arg BASE=node:20-alpine .

Typical results for a NestJS API:

  • node:20 base: ~1.4 GB
  • node:20-slim base: ~350 MB
  • node:20-alpine base: ~130 MB

The difference matters for:

  • Registry storage costs
  • Pull time in CI/CD (pulling 1.4 GB vs 130 MB is 10x different)
  • Attack surface (fewer packages = fewer vulnerabilities)

Running as Non-Root

By default, Node.js containers run as root. If someone exploits your app and gets shell access, they’re root inside the container — which can mean writing to mounted volumes, reading secrets from env, or attempting container escapes.

The fix is two lines:

RUN groupadd --gid 1001 nodejs && \
    useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs

# ... copy files, install deps ...

USER nodejs

Ensure any directories your app writes to are owned by this user:

RUN mkdir -p /app/uploads && chown nodejs:nodejs /app/uploads
USER nodejs

For Alpine (which uses addgroup/adduser):

RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nodejs
USER nodejs

Docker Compose for Local Development

Docker Compose replaces the Azure Local Emulator + SQL Server LocalDB + manually started services pattern. Define everything your app needs and start it with one command:

# docker-compose.yml
version: '3.9'

services:
  api:
    build:
      context: .
      target: deps                  # Use the deps stage, not production
    volumes:
      - ./src:/app/src              # Mount source for hot-reload
      - ./dist:/app/dist
    command: npm run start:dev      # ts-node or nodemon for hot reload
    ports:
      - "3000:3000"
      - "9229:9229"                 # Node.js debugger port
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://postgres:password@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"                 # Expose for local tools (DBeaver, TablePlus)
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql  # Seed script
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"                 # Expose for local inspection (Redis Insight)
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  # Optional: PgAdmin for database management (replaces SSMS)
  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - postgres
    profiles:
      - tools                       # Only starts with: docker compose --profile tools up

volumes:
  postgres_data:
  redis_data:

Usage:

# Start all services
docker compose up -d

# Start with tools (pgadmin)
docker compose --profile tools up -d

# View logs
docker compose logs -f api
docker compose logs -f postgres

# Restart just the API (after code changes if not using hot-reload)
docker compose restart api

# Open a shell in the running container
docker compose exec api sh

# Run a one-off command (like migrations)
docker compose exec api npm run db:migrate

# Stop everything
docker compose down

# Stop and remove volumes (reset database)
docker compose down -v

Development vs Production Compose Files

Use override files for environment-specific config:

# docker-compose.override.yml (loaded automatically in dev)
services:
  api:
    build:
      target: deps                  # Dev stage, not production
    volumes:
      - ./src:/app/src
    command: npm run start:dev
    environment:
      NODE_ENV: development
      LOG_LEVEL: debug
# docker-compose.prod.yml (explicit for production testing)
services:
  api:
    build:
      target: production
    environment:
      NODE_ENV: production
# Development (uses docker-compose.yml + docker-compose.override.yml automatically)
docker compose up

# Production simulation
docker compose -f docker-compose.yml -f docker-compose.prod.yml up

When Docker vs Render Native Builds

Render can build your Node.js app natively (without Docker) using its buildpack — just set a build command and it handles Node.js setup automatically. When should you use Docker instead?

Use Render’s native build when:

  • Standard Node.js app with no special system dependencies
  • You want the simplest possible deploy setup
  • Build time matters and you want to skip image build + push steps

Use Docker when:

  • You need specific system libraries (ImageMagick, ffmpeg, specific glibc version)
  • You want identical behavior between local dev and production
  • Your app has non-standard startup requirements
  • You’re deploying the same image to multiple environments (Render, other cloud, on-prem)
  • You need to control exactly what’s in the production environment

For Render with Docker:

# render.yaml
services:
  - type: web
    name: my-api
    runtime: docker                 # Use Dockerfile instead of buildpack
    dockerfilePath: ./Dockerfile
    dockerContext: .
    healthCheckPath: /health

BuildKit Features

Enable BuildKit for faster builds and advanced features:

# Enable for a single build
DOCKER_BUILDKIT=1 docker build .

# Enable globally (add to ~/.profile or ~/.zshrc)
export DOCKER_BUILDKIT=1

# Or use the newer syntax
docker buildx build .

BuildKit enables:

  • --mount=type=cache — persistent build cache (pnpm store, apt cache); see the sketch after this list
  • --mount=type=secret — pass secrets to build without baking them into layers
  • Parallel stage execution
  • Better output with progress reporting
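
As an illustration of the cache mount: a minimal sketch of a deps stage that keeps npm's download cache between builds, so unchanged packages are not re-downloaded (swap the target path for pnpm's store directory if pnpm is your package manager):

# syntax=docker/dockerfile:1
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# The cache mount persists across builds but never ends up in the image layers
RUN --mount=type=cache,target=/root/.npm \
    npm ci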

Passing secrets at build time (e.g., for private npm registry):

# In Dockerfile
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm ci

# In build command
docker build --secret id=npmrc,src=.npmrc .

This is safer than ARG NPM_TOKEN which bakes the token into the image layer history.
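
You can see the difference for yourself: a build ARG that was consumed by a RUN instruction is recorded in the layer history, while a secret mount leaves no trace (my-image is a placeholder):

# If the token had been passed as ARG NPM_TOKEN, it would show up here:
docker history --no-trunc my-image | grep NPM_TOKEN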


Key Differences from .NET Docker

Concern             | .NET                                 | Node.js
Build artifact      | Self-contained binary or DLL set     | Compiled JS + node_modules
Runtime image needs | Just the .NET runtime                | Node.js runtime + production node_modules
Image base          | mcr.microsoft.com/dotnet/aspnet      | node:20-slim or Alpine
Non-root user       | app user (sometimes pre-configured)  | Must create manually
Hot reload dev      | Volume mount + dotnet watch          | Volume mount + nodemon or ts-node-dev
Native addons       | Rarely an issue                      | Watch for musl/glibc conflicts on Alpine
Port                | 8080 default                         | Any port, typically 3000/4000
Build cache key     | .csproj restore hash                 | package-lock.json or pnpm-lock.yaml

Gotchas for .NET Engineers

Gotcha 1: node_modules inside the container conflicts with local node_modules. When you volume-mount your project directory for hot-reload (-v ./src:/app/src), if you also mount the whole project (-v .:/app), the container’s node_modules gets replaced by your host machine’s node_modules. These may differ (different OS, architecture, native addon compilation). Fix: use named volume for node_modules:

services:
  api:
    volumes:
      - .:/app
      - node_modules:/app/node_modules    # Named volume shadows the bind-mounted path

volumes:
  node_modules:                           # Top-level declaration of the named volume

Gotcha 2: Alpine native addon failures are silent until runtime. npm install on Alpine may succeed even when a native addon falls back to a pure-JS polyfill; the failure only appears at runtime, not at build time. A common example is bcrypt vs bcryptjs: bcrypt is faster (native), bcryptjs is pure JS and always works on Alpine. Test your exact dependencies on Alpine before committing to it.
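
A cheap way to catch this before committing to Alpine is a throwaway container that installs your real dependencies and tries to load the native addon. A hedged sketch (bcrypt is just the example from above; substitute your own packages, and note it expects a package-lock.json in the current directory):

docker run --rm -v "$PWD":/app -w /app node:20-alpine \
  sh -c "npm ci && node -e \"require('bcrypt') && console.log('native addon loaded')\""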

Gotcha 3: Layer cache busts propagate forward. If you change a layer, every subsequent layer is also rebuilt. This means the order of COPY and RUN statements matters enormously. Always copy only what each layer needs — COPY package.json package-lock.json ./ before RUN npm ci, not COPY . .. A common mistake is copying a README or tsconfig.json early and accidentally busting the npm install cache.

Gotcha 4: npm start vs node dist/main.js in production. npm start spawns npm as a parent process which then spawns node. This means signals (like SIGTERM from Docker during graceful shutdown) go to npm, which may not properly forward them to Node.js. Always use CMD ["node", "dist/main.js"] directly in production, not CMD ["npm", "start"]. If you must use npm scripts, use exec in the script: "start": "exec node dist/main.js".
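
Running node directly also means you own the signal handling. A minimal sketch for the Express example used later in this chapter (the timeout and cleanup logic are placeholders to adapt):

import express from 'express';

const app = express();
const port = parseInt(process.env.PORT ?? '3000', 10);

const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

// Docker sends SIGTERM on "docker stop"; stop accepting connections, then exit.
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
  setTimeout(() => process.exit(1), 10_000).unref(); // force-exit if requests hang
});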

Gotcha 5: EXPOSE does not actually expose ports. EXPOSE in a Dockerfile is documentation only — it does nothing at runtime. Ports are only actually published when you run docker run -p 3000:3000 or define ports: in Docker Compose. Do not confuse EXPOSE with port publishing. Include it anyway — tools like Docker Desktop and orchestrators use it for discovery.

Gotcha 6: Health check scripts must be in the image. If your health check uses curl (HEALTHCHECK CMD curl -f http://localhost:3000/health), but your image is node:20-alpine which doesn’t include curl, the health check fails at startup with a confusing error. Use a Node.js inline script instead (shown in the Dockerfile above) — it’s always available if Node.js is.


Hands-On Exercise

Build and run a minimal Express + TypeScript API in Docker (the same multi-stage pattern applies to a NestJS app):

mkdir docker-practice && cd docker-practice

# Initialize a minimal Node.js + TypeScript project
npm init -y
npm install express
npm install -D typescript @types/express @types/node ts-node

cat > tsconfig.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true
  }
}
EOF

mkdir src
cat > src/index.ts << 'EOF'
import express from 'express';

const app = express();
const port = parseInt(process.env.PORT ?? '3000', 10);

app.get('/health', (req, res) => {
  res.json({ status: 'ok', pid: process.pid, node: process.version });
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
EOF

# Add build script to package.json
npm pkg set scripts.build="tsc"
npm pkg set scripts.start="node dist/index.js"

Now create the Dockerfile:

cat > Dockerfile << 'EOF'
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-slim AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-slim AS production
WORKDIR /app
ENV NODE_ENV=production
RUN groupadd --gid 1001 nodejs && \
    useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=build /app/dist ./dist
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=15s --timeout=5s --start-period=10s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["node", "dist/index.js"]
EOF

cat > .dockerignore << 'EOF'
node_modules/
dist/
.env
.git/
*.md
EOF

# Build the image
docker build --target production -t docker-practice:latest .

# Check the image size
docker image inspect docker-practice:latest --format='{{.Size}}' | numfmt --to=iec

# Run it
docker run -p 3000:3000 --name docker-practice -d docker-practice:latest

# Test the health endpoint
curl http://localhost:3000/health

# Check health status
docker inspect --format='{{.State.Health.Status}}' docker-practice

# Clean up
docker stop docker-practice && docker rm docker-practice

Try changing the base image to node:20-alpine and compare sizes.


Quick Reference

# Multi-stage Node.js Dockerfile skeleton
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-slim AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-slim AS production
WORKDIR /app
ENV NODE_ENV=production
RUN groupadd --gid 1001 nodejs && \
    useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=build /app/dist ./dist
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["node", "dist/main.js"]

# Build commands
docker build .                              # Build default target
docker build --target production .          # Build specific stage
docker build --no-cache .                   # Ignore all cached layers
DOCKER_BUILDKIT=1 docker build .            # Enable BuildKit

# Run commands
docker run -p 3000:3000 -d my-image        # Run detached
docker run --rm -it my-image sh            # Interactive shell
docker exec -it container-name sh          # Shell in running container

# Docker Compose
docker compose up -d                       # Start all services
docker compose up -d --build              # Rebuild images then start
docker compose logs -f api                 # Follow logs
docker compose exec api sh                 # Shell in service
docker compose down                        # Stop and remove containers
docker compose down -v                     # Also remove volumes

# Inspect
docker image ls                            # List images
docker image inspect my-image             # Image details
docker container ls                        # Running containers
docker stats                              # Live resource usage

Further Reading

Monorepo Tooling: Turborepo and pnpm Workspaces

For .NET engineers who know: .NET Solutions (.sln), project references, MSBuild, multi-project builds, NuGet package sharing between projects
You’ll learn: How pnpm workspaces and Turborepo replace the .NET solution model for JavaScript/TypeScript monorepos, with faster builds through intelligent caching
Time: 15-20 minutes


The .NET Way (What You Already Know)

In .NET, a solution file (.sln) groups related projects. You have:

MySolution.sln
  MyApp.Web/            # ASP.NET Core app
  MyApp.Api/            # Another API project
  MyApp.Shared/         # Shared class library
  MyApp.Tests/          # Test project referencing the above

# MyApp.Web.csproj references MyApp.Shared.csproj:
<ProjectReference Include="..\MyApp.Shared\MyApp.Shared.csproj" />

MSBuild understands the dependency graph. When you build the solution, it builds MyApp.Shared first (because others depend on it), then builds everything else in parallel where possible.

The JavaScript ecosystem evolved a similar pattern: workspaces replace project references, and Turborepo replaces MSBuild’s orchestration — but with remote caching that MSBuild doesn’t have.


The JS/TS Monorepo Way

The Problem Monorepos Solve

Without a monorepo, you might have separate repos for your frontend, API, and shared types. Sharing types between them requires:

  • Publishing the shared types package to npm (even privately)
  • Bumping versions when types change
  • Remembering to install the updated package in dependent repos
  • Out-of-sync type definitions causing runtime bugs

A monorepo puts all packages in one repo. Shared types are referenced directly — no publishing required. A type change in packages/types is immediately visible to apps/web and apps/api without any version bumps.


pnpm Workspaces

pnpm is our package manager of choice (see article 1.3). Its workspace feature is the foundation of our monorepo setup.

Workspace configuration (pnpm-workspace.yaml at repo root):

# pnpm-workspace.yaml
packages:
  - 'apps/*'           # All directories under apps/
  - 'packages/*'       # All directories under packages/

This tells pnpm: “treat every directory under apps/ and packages/ as a workspace package.” Each directory needs its own package.json.

Root package.json:

{
  "name": "my-monorepo",
  "private": true,
  "version": "0.0.0",
  "engines": {
    "node": ">=20",
    "pnpm": ">=9"
  },
  "scripts": {
    "build": "turbo run build",
    "dev": "turbo run dev --parallel",
    "lint": "turbo run lint",
    "typecheck": "turbo run typecheck",
    "test": "turbo run test",
    "clean": "turbo run clean && rm -rf node_modules"
  },
  "devDependencies": {
    "turbo": "^2.0.0",
    "typescript": "^5.4.0"
  }
}

Workspace Package Structure

my-monorepo/
  pnpm-workspace.yaml
  package.json              # Root — workspace config, Turbo scripts
  turbo.json                # Turborepo configuration
  tsconfig.base.json        # Shared TypeScript config
  .gitignore
  .npmrc                    # pnpm settings

  apps/
    web/                    # Next.js frontend
      package.json
      next.config.js
      src/
    api/                    # NestJS backend
      package.json
      nest-cli.json
      src/

  packages/
    types/                  # Shared TypeScript types/interfaces
      package.json
      src/
    utils/                  # Shared utility functions
      package.json
      src/
    ui/                     # Shared React component library
      package.json
      src/
    config/                 # Shared config (ESLint, TypeScript, etc.)
      package.json
      eslint-preset.js
      tsconfig/

Defining Package Names

Each workspace package has a package.json with a name field. By convention, we namespace them:

// packages/types/package.json
{
  "name": "@myapp/types",
  "version": "0.0.1",
  "private": true,
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  },
  "scripts": {
    "build": "tsup src/index.ts --format cjs,esm --dts",
    "dev": "tsup src/index.ts --format cjs,esm --dts --watch",
    "typecheck": "tsc --noEmit",
    "lint": "eslint src/"
  },
  "devDependencies": {
    "@myapp/config": "workspace:*",
    "tsup": "^8.0.0"
  }
}

// apps/web/package.json
{
  "name": "@myapp/web",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "@myapp/types": "workspace:*",
    "@myapp/utils": "workspace:*",
    "@myapp/ui": "workspace:*",
    "next": "^14.0.0",
    "react": "^18.0.0"
  }
}

The workspace:* protocol tells pnpm: “use the local version of this package, not the npm registry version.” This is equivalent to a <ProjectReference> in .NET.


Shared TypeScript Configuration

Rather than duplicating tsconfig.json across every package, extend a base config:

// tsconfig.base.json (root)
{
  "compilerOptions": {
    "target": "ES2022",
    "lib": ["ES2022"],
    "module": "ESNext",
    "moduleResolution": "Bundler",
    "strict": true,
    "exactOptionalPropertyTypes": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "skipLibCheck": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  }
}

// packages/types/tsconfig.json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"]
}

// apps/web/tsconfig.json
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "jsx": "preserve",
    "lib": ["ES2022", "DOM", "DOM.Iterable"],
    "plugins": [{ "name": "next" }]
  },
  "include": ["src/**/*", ".next/types/**/*.ts"]
}

Turborepo

pnpm workspaces handle dependency linking. Turborepo handles build orchestration and caching — the equivalent of MSBuild’s dependency-aware parallel build, but smarter.

turbo.json — the equivalent of MSBuild’s build graph:

{
  "$schema": "https://turbo.build/schema.json",
  "ui": "tui",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "inputs": ["src/**/*.ts", "src/**/*.tsx", "package.json", "tsconfig.json"],
      "outputs": ["dist/**", ".next/**", "!.next/cache/**"]
    },
    "dev": {
      "dependsOn": ["^build"],
      "persistent": true,
      "cache": false
    },
    "typecheck": {
      "dependsOn": ["^build"],
      "inputs": ["src/**/*.ts", "src/**/*.tsx", "tsconfig.json"]
    },
    "lint": {
      "inputs": ["src/**/*.ts", "src/**/*.tsx", ".eslintrc*"]
    },
    "test": {
      "dependsOn": ["^build"],
      "inputs": ["src/**/*.ts", "src/**/*.tsx", "**/*.test.ts"],
      "outputs": ["coverage/**"]
    },
    "clean": {
      "cache": false
    }
  }
}

Key concepts:

"dependsOn": ["^build"] — the ^ prefix means “build all dependencies first.” This is like <ProjectReference> forcing the referenced project to build before the referencing one. Without this, apps/web might try to build before packages/types has generated its dist/.

"inputs" — files that, when changed, invalidate the cache. If none of the input files changed, Turbo replays the cached result instantly.

"outputs" — files that are cached and restored. If the cache hits, these are restored without re-running the build.

"persistent": true — marks long-running tasks (like dev servers) that never complete. Turbo won’t wait for them to finish.

"cache": false — never cache this task (used for deploy tasks, clean, etc.)


How the Cache Works

This is where Turborepo beats MSBuild significantly. Turbo computes a hash of:

  • All input files (inputs config)
  • The task’s environment variables
  • The turbo.json configuration

If the hash matches a previous run, the task is skipped and its output is restored from cache — instantly. This works locally (.turbo/ directory) and remotely (Vercel Remote Cache or self-hosted).

# First run: all tasks execute
turbo run build
# Tasks: @myapp/types:build, @myapp/utils:build, @myapp/web:build, @myapp/api:build
# Time: 45s

# Second run with no changes: everything from cache
turbo run build
# Tasks: @myapp/types:build [CACHED], @myapp/utils:build [CACHED], ...
# Time: 0.4s

# Change only packages/types/src/user.ts
# Turbo detects which packages are affected by the change:
turbo run build
# Tasks: @myapp/types:build (changed), @myapp/web:build (depends on types), @myapp/api:build
# @myapp/utils:build [CACHED] (doesn't depend on types)
# Time: 12s

This is the “build only what changed” behavior .NET engineers know from incremental builds — but applied across packages in the repo and sharable across your entire team.
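
To see what Turbo plans to do before running anything, a dry run prints each task, its computed hash, and whether it would be a cache hit (exact output varies by Turbo version):

# Show the task graph and cache status without executing tasks
turbo run build --dry-run

# Machine-readable version: hashes, inputs, and cache status per task
turbo run build --dry=json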


Remote Caching

Remote cache means a CI run’s results are available to other CI runs and to local machines. Once one CI machine builds @myapp/utils, no other CI machine (or developer’s laptop) needs to rebuild it unless the inputs change.

# Link to Vercel Remote Cache (free for Turborepo users)
npx turbo login
npx turbo link

# Or self-host a Turborepo-compatible remote cache server (e.g., the open-source ducktors/turborepo-remote-cache)
# Set TURBO_API, TURBO_TOKEN, TURBO_TEAM environment variables

In GitHub Actions:

- name: Build
  run: turbo run build
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ vars.TURBO_TEAM }}

Running Scripts Across Packages

# Run "build" in all packages (respecting dependency order)
pnpm run build                        # delegates to turbo via root package.json

# Run a script in a specific package
pnpm --filter @myapp/types build
pnpm --filter @myapp/web dev

# Run in all packages matching a pattern
pnpm --filter "@myapp/*" build

# Run in a package and all its dependencies
pnpm --filter @myapp/web... build

# Run in all packages changed since origin/main, plus the packages that depend on them
pnpm --filter "...[origin/main]" build

# Add a dependency to a specific package
pnpm --filter @myapp/web add react-query

# Add a shared devDependency to the root
pnpm add -Dw typescript

# Add a workspace package as a dependency
pnpm --filter @myapp/web add @myapp/types
# (pnpm automatically uses workspace:* protocol)

Complete Monorepo Template

Full directory structure with file contents:

# Bootstrap command
mkdir my-monorepo && cd my-monorepo
git init

# Root package.json
cat > package.json << 'EOF'
{
  "name": "my-monorepo",
  "private": true,
  "version": "0.0.0",
  "scripts": {
    "build": "turbo run build",
    "dev": "turbo run dev --parallel",
    "lint": "turbo run lint",
    "typecheck": "turbo run typecheck",
    "test": "turbo run test",
    "clean": "turbo run clean && rm -rf node_modules"
  },
  "devDependencies": {
    "turbo": "^2.0.0",
    "typescript": "^5.4.0"
  },
  "engines": {
    "node": ">=20",
    "pnpm": ">=9"
  },
  "packageManager": "pnpm@9.0.0"
}
EOF

# pnpm workspace config
cat > pnpm-workspace.yaml << 'EOF'
packages:
  - 'apps/*'
  - 'packages/*'
EOF

# Turborepo config
cat > turbo.json << 'EOF'
{
  "$schema": "https://turbo.build/schema.json",
  "ui": "tui",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "inputs": ["src/**/*.ts", "src/**/*.tsx", "package.json", "tsconfig.json"],
      "outputs": ["dist/**"]
    },
    "dev": {
      "dependsOn": ["^build"],
      "persistent": true,
      "cache": false
    },
    "typecheck": {
      "dependsOn": ["^build"],
      "inputs": ["src/**/*.ts", "src/**/*.tsx", "tsconfig.json"]
    },
    "lint": {
      "inputs": ["src/**/*.ts", "src/**/*.tsx", ".eslintrc*"]
    },
    "test": {
      "inputs": ["src/**", "**/*.test.ts"],
      "outputs": ["coverage/**"]
    },
    "clean": {
      "cache": false
    }
  }
}
EOF

# Shared types package
mkdir -p packages/types/src
cat > packages/types/package.json << 'EOF'
{
  "name": "@myapp/types",
  "version": "0.0.1",
  "private": true,
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "default": "./dist/index.js"
    }
  },
  "scripts": {
    "build": "tsup src/index.ts --format cjs --dts --clean",
    "dev": "tsup src/index.ts --format cjs --dts --watch",
    "typecheck": "tsc --noEmit",
    "lint": "echo 'lint ok'",
    "clean": "rm -rf dist"
  },
  "devDependencies": {
    "tsup": "^8.0.0",
    "typescript": "^5.4.0"
  }
}
EOF

cat > packages/types/src/index.ts << 'EOF'
export interface User {
  id: string;
  email: string;
  name: string;
  createdAt: Date;
}

export interface ApiResponse<T> {
  data: T;
  message?: string;
}

export interface PaginatedResponse<T> extends ApiResponse<T[]> {
  total: number;
  page: number;
  pageSize: number;
}
EOF

cat > packages/types/tsconfig.json << 'EOF'
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"]
}
EOF

# Shared utils package
mkdir -p packages/utils/src
cat > packages/utils/package.json << 'EOF'
{
  "name": "@myapp/utils",
  "version": "0.0.1",
  "private": true,
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "default": "./dist/index.js"
    }
  },
  "scripts": {
    "build": "tsup src/index.ts --format cjs --dts --clean",
    "dev": "tsup src/index.ts --format cjs --dts --watch",
    "typecheck": "tsc --noEmit",
    "lint": "echo 'lint ok'",
    "clean": "rm -rf dist"
  },
  "dependencies": {
    "@myapp/types": "workspace:*"
  },
  "devDependencies": {
    "tsup": "^8.0.0",
    "typescript": "^5.4.0"
  }
}
EOF

cat > packages/utils/src/index.ts << 'EOF'
import type { PaginatedResponse } from '@myapp/types';

export function paginate<T>(
  items: T[],
  page: number,
  pageSize: number
): PaginatedResponse<T> {
  const start = (page - 1) * pageSize;
  return {
    data: items.slice(start, start + pageSize),
    total: items.length,
    page,
    pageSize,
  };
}

export function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

export function invariant(
  condition: unknown,
  message: string
): asserts condition {
  if (!condition) throw new Error(message);
}
EOF

# Base tsconfig
cat > tsconfig.base.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "lib": ["ES2022"],
    "module": "CommonJS",
    "moduleResolution": "Node",
    "strict": true,
    "skipLibCheck": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  }
}
EOF

# .gitignore
cat > .gitignore << 'EOF'
node_modules/
dist/
.next/
.turbo/
coverage/
*.tsbuildinfo
.env
.env.*
!.env.example
EOF

# Install everything
pnpm install

# Build all packages
pnpm run build

Shared ESLint Configuration

Put shared lint config in a package:

// packages/config/eslint-preset.js
module.exports = {
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
    'plugin:@typescript-eslint/recommended-requiring-type-checking',
  ],
  plugins: ['@typescript-eslint'],
  parser: '@typescript-eslint/parser',
  rules: {
    '@typescript-eslint/no-unused-vars': ['error', { argsIgnorePattern: '^_' }],
    '@typescript-eslint/no-explicit-any': 'error',
    'no-console': ['warn', { allow: ['warn', 'error'] }],
  },
};

// packages/config/package.json
{
  "name": "@myapp/config",
  "version": "0.0.1",
  "private": true,
  "main": "eslint-preset.js",
  "dependencies": {
    "@typescript-eslint/eslint-plugin": "^7.0.0",
    "@typescript-eslint/parser": "^7.0.0",
    "eslint": "^8.0.0"
  }
}

// apps/api/.eslintrc.json
{
  "extends": ["@myapp/config"],
  "parserOptions": {
    "project": "./tsconfig.json"
  }
}

Filtering Builds in CI

Turborepo can detect which packages changed relative to a base branch and only run tasks for affected packages:

# .github/workflows/ci.yml
- name: Build affected packages
  run: pnpm turbo run build --filter="...[origin/main]"
  # ^ Only builds packages that changed from main, plus their dependents

This is equivalent to Azure Pipelines path-based triggers, but smarter: if you change packages/types, Turbo knows to rebuild apps/web and apps/api (because they depend on types) but not packages/utils (if it doesn’t depend on types).
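
One practical note: the ...[origin/main] filter diffs against a git ref, so the CI clone has to contain that ref (and enough history to find the merge base). With actions/checkout that usually means fetch-depth: 0, or fetching the base branch explicitly before filtering; a sketch:

# Make sure origin/main exists in the CI clone before filtering against it
git fetch origin main
pnpm turbo run build --filter="...[origin/main]"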


Key Differences

.NET Concept              | JS Monorepo Equivalent                | Notes
.sln file                 | pnpm-workspace.yaml                   | Defines which projects are in the workspace
.csproj                   | package.json                          | Each package’s manifest
<ProjectReference>        | "@myapp/types": "workspace:*"         | Local package dependency
MSBuild dependency graph  | turbo.json dependsOn                  | Defines build order
MSBuild incremental build | Turborepo local cache                 | Hash-based, per-task
No MSBuild equivalent     | Turborepo remote cache                | Shared cache across machines/CI
NuGet package             | Published npm package                 | For external sharing; internal use workspace:*
Shared class library      | packages/ workspace                   | Types, utils, UI components
dotnet build              | pnpm run build (→ turbo run build)    | Runs all tasks in dependency order
dotnet test               | pnpm run test (→ turbo run test)      | Same
Solution-wide restore     | pnpm install                          | Installs all workspace deps at once

Gotchas for .NET Engineers

Gotcha 1: workspace:* does not auto-build the dependency. Adding "@myapp/types": "workspace:*" to your app’s dependencies symlinks the package, but it does NOT build it. If packages/types hasn’t been built (no dist/ directory), importing from it will fail. The "dependsOn": ["^build"] in turbo.json handles this for Turbo commands — but if you run pnpm --filter @myapp/web dev directly without Turbo, you may get import errors until you manually build the dependencies first.
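
If you do bypass Turbo, build the app's workspace dependencies first. The caret form of pnpm's filter selects only the dependencies of a package, not the package itself:

# Build everything @myapp/web depends on (types, utils, ui), then start its dev server
pnpm --filter "@myapp/web^..." run build
pnpm --filter @myapp/web dev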

Gotcha 2: TypeScript paths and exports must match. When apps/web imports from '@myapp/types', TypeScript resolves this through the package’s types field in package.json (pointing to dist/index.d.ts). If the dist/ isn’t there (not built yet) or if the exports field doesn’t include types, TypeScript will report Cannot find module '@myapp/types'. This is not a missing package — it’s a missing build artifact.

Gotcha 3: Changing turbo.json invalidates the entire cache. Any change to turbo.json causes Turborepo to consider all existing cache entries invalid, since the task definition itself changed. Similarly, changing tsconfig.base.json invalidates the cache for all TypeScript builds if it’s listed in the inputs. This is correct behavior but can surprise you when a small config tweak triggers a full rebuild.

Gotcha 4: pnpm install must be run from the repo root. Running pnpm install inside a package directory (e.g., cd packages/types && pnpm install) creates a separate node_modules inside that package and breaks the workspace symlinks. Always run pnpm install from the monorepo root. If you accidentally do this, delete the nested node_modules and run pnpm install from root again.

Gotcha 5: Circular dependencies between packages will cause hard-to-debug errors. If packages/utils depends on packages/types, and you later add a dependency from packages/types to packages/utils, Turbo will detect the circular dependency and refuse to build. This is the equivalent of circular project references in .NET solutions, which the compiler rejects. Design your package dependency graph as a DAG (directed acyclic graph): apps depend on packages, and packages can depend on other packages but not apps.

Gotcha 6: turbo.json outputs must include all generated files, or caching breaks. If your build generates files that aren’t listed in outputs, those files won’t be restored from cache on a cache hit. The task appears to succeed (it returns the cached result) but your dist/ is missing files. Always enumerate all generated artifacts in outputs. Use ! exclusions for large build caches you don’t want to store: "outputs": [".next/**", "!.next/cache/**"].


Hands-On Exercise

Set up a minimal monorepo with a shared types package consumed by two apps:

mkdir turbo-practice && cd turbo-practice

# Install pnpm if needed
npm install -g pnpm

# Initialize
cat > pnpm-workspace.yaml << 'EOF'
packages:
  - 'apps/*'
  - 'packages/*'
EOF

cat > package.json << 'EOF'
{
  "name": "turbo-practice",
  "private": true,
  "scripts": {
    "build": "turbo run build",
    "dev": "turbo run dev --parallel",
    "typecheck": "turbo run typecheck"
  },
  "devDependencies": {
    "turbo": "latest",
    "typescript": "^5.4.0"
  }
}
EOF

cat > turbo.json << 'EOF'
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "typecheck": {
      "dependsOn": ["^build"]
    }
  }
}
EOF

cat > tsconfig.base.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "moduleResolution": "Node",
    "strict": true,
    "declaration": true,
    "skipLibCheck": true
  }
}
EOF

# Create shared types package
mkdir -p packages/types/src
cat > packages/types/package.json << 'EOF'
{
  "name": "@practice/types",
  "version": "0.0.1",
  "private": true,
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "scripts": {
    "build": "tsc",
    "typecheck": "tsc --noEmit"
  },
  "devDependencies": {
    "typescript": "^5.4.0"
  }
}
EOF

cat > packages/types/tsconfig.json << 'EOF'
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": { "outDir": "dist", "rootDir": "src" },
  "include": ["src"]
}
EOF

cat > packages/types/src/index.ts << 'EOF'
export interface Greeting {
  message: string;
  timestamp: Date;
}
EOF

# Create app-a
mkdir -p apps/app-a/src
cat > apps/app-a/package.json << 'EOF'
{
  "name": "@practice/app-a",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "build": "tsc",
    "typecheck": "tsc --noEmit"
  },
  "dependencies": {
    "@practice/types": "workspace:*"
  },
  "devDependencies": {
    "typescript": "^5.4.0"
  }
}
EOF

cat > apps/app-a/tsconfig.json << 'EOF'
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": { "outDir": "dist", "rootDir": "src" },
  "include": ["src"]
}
EOF

cat > apps/app-a/src/index.ts << 'EOF'
import type { Greeting } from '@practice/types';

const greeting: Greeting = {
  message: 'Hello from app-a',
  timestamp: new Date(),
};

console.log(greeting.message);
EOF

# Install everything
pnpm install

# Build all (Turbo builds packages/types first, then apps/app-a)
pnpm run build

# Notice the output — types built before app-a
# Now run again — everything is cached
pnpm run build

# Modify packages/types/src/index.ts and build again
# Only types and app-a rebuild, because app-a depends on types
echo 'export type GreetingType = "formal" | "casual";' >> packages/types/src/index.ts
pnpm run build
# app-a rebuilds because its dependency changed

Quick Reference

# Install all workspace dependencies
pnpm install

# Add dependency to specific package
pnpm --filter @myapp/web add react-query
pnpm --filter @myapp/web add @myapp/types   # workspace:* auto-set

# Add devDependency to root (shared tooling)
pnpm add -Dw typescript turbo

# Run script in all packages
pnpm run build              # Via turbo (respects dependency order + cache)

# Run in specific package
pnpm --filter @myapp/web dev

# Run in package + all its dependencies
pnpm --filter "@myapp/web..." build

# Run for all changed packages (CI)
turbo run build --filter="...[origin/main]"

# Inspect workspace
pnpm list -r --depth -1     # List every workspace package
pnpm why some-package       # Why is this installed?

# Clear turbo cache
turbo run build --force     # Bypass cache for this run
rm -rf .turbo               # Delete all local cache

// turbo.json task patterns
{
  "tasks": {
    "build": {
      "dependsOn": ["^build"],     // ^ = build dependencies first
      "inputs": ["src/**/*.ts"],   // Cache key inputs
      "outputs": ["dist/**"]       // Files to cache and restore
    },
    "dev": {
      "persistent": true,          // Long-running, never "done"
      "cache": false               // Never cache
    },
    "test": {
      "dependsOn": ["build"],      // No ^ = depend on same package's build
      "outputs": ["coverage/**"]
    }
  }
}

# pnpm-workspace.yaml
packages:
  - 'apps/*'
  - 'packages/*'
  - 'tools/*'

Further Reading

6.6 — Preview Environments & Branch Deployments

For .NET engineers who know: Azure DevOps deployment slots, stage/UAT environments, and release pipelines with manual approval gates
You’ll learn: How modern hosting platforms auto-provision isolated environments per pull request, and how to wire that into your QA workflow
Time: 10-15 min read


The .NET Way (What You Already Know)

In the Azure world, environment promotion is a deliberate, configuration-heavy process. You create named environments in Azure DevOps (Development, Staging, Production), configure deployment slots in App Service, and wire up a release pipeline that promotes artifacts through gates. Staging slots share the same App Service Plan as production, which keeps them running, but spinning up a completely isolated environment for a single feature branch means provisioning another App Service, another database connection string, and another DNS entry — work typically done by hand or through ARM templates. The process is heavyweight enough that most teams skip branch-level environments entirely and rely on a shared QA slot that everyone deploys to in sequence.

The consequence is familiar: two developers are both testing features, one overwrites the shared QA environment, and someone ends up testing against the wrong code. “QA is broken” becomes a weekly conversation.


The Modern Hosting Way

Render and Vercel treat environment provisioning as a first-class, zero-configuration feature. Every pull request automatically gets its own deployed environment — its own URL, its own environment variables, its own isolated existence — and that environment is destroyed when the PR closes. No manual provisioning. No shared slots. No “QA is broken because Alex deployed his branch.”

The mental model shift: environments are cheap and ephemeral. Create one per branch, tear it down when the branch merges. QA happens on isolated previews, not on a shared staging slot.

Render Preview Environments

Render calls this feature Preview Environments. You enable it at the service level in the Render dashboard, and from that point on, every new branch pushed to GitHub with an open PR gets a separate deployment.

Enabling a preview environment:

In the Render dashboard, navigate to your service, open Settings, and toggle on Preview Environments. Render will show a preview environment tab listing all active branch deployments.

Your render.yaml — the Render Blueprint spec:

# render.yaml — infrastructure as code for Render
services:
  - type: web
    name: api
    runtime: node
    buildCommand: pnpm install && pnpm build
    startCommand: node dist/main.js
    envVars:
      - key: NODE_ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: postgres-db
          property: connectionString
    previewsEnabled: true        # enables preview environments per PR
    previewsExpireAfterDays: 7   # auto-cleanup after 7 days

databases:
  - name: postgres-db
    databaseName: myapp
    user: myapp
    previewPlan: starter         # preview DBs use a cheaper plan

When previewsEnabled: true is set, Render provisions a separate database instance for each preview environment, running the schema migration on first deploy. This is the critical difference from Azure slots, which share a database by default.

Environment variables in preview environments:

Preview environments inherit the parent service’s environment variables by default. You can override specific variables for preview contexts:

envVars:
  - key: STRIPE_SECRET_KEY
    sync: false          # prompts for manual entry per environment
  - key: EMAIL_PROVIDER
    value: preview-stub  # override for previews — no real emails sent
  - key: LOG_LEVEL
    value: debug         # more verbose in preview

For sensitive keys (payment processors, external APIs), use sync: false to prevent preview environments from inheriting production credentials. Render prompts you to set these manually the first time a preview spins up.

The preview URL pattern:

Render generates URLs in the form https://api-pr-42-abc123.onrender.com. You can find the URL in the Render dashboard under the PR’s preview environment tab, and Render also posts it as a GitHub Deployment status, which appears directly on the PR.

Vercel Preview Deployments

If the frontend is on Vercel (Next.js), previews are on by default with zero configuration. Every push to any branch creates a deployment. The URL appears as a GitHub status check on the commit and PR.

# Vercel CLI — promote a preview to production manually
vercel promote https://myapp-abc123.vercel.app --token=$VERCEL_TOKEN

Branch-specific environment variables in Vercel:

Vercel lets you scope environment variables to environment type (Production, Preview, Development) and optionally to specific branches:

# Set an env var only for preview environments
vercel env add API_BASE_URL preview
# When prompted: https://api-preview.example.com

# Set an env var only for the `staging` branch
vercel env add FEATURE_FLAG_EXPERIMENTAL preview staging

In the Vercel dashboard: Settings > Environment Variables > add variable, then use the “Branch” field to scope it.


PR-Based Deployments in CI/CD

The GitHub Actions workflow that connects your PR to the preview environment:

# .github/workflows/preview.yml
name: Preview Deploy

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  deploy-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup pnpm
        uses: pnpm/action-setup@v3
        with:
          version: 9

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Run tests
        run: pnpm test

      - name: Deploy to Render Preview
        # Render auto-deploys on push to the PR branch.
        # The next step posts the preview URL as a PR comment.
        uses: render-oss/render-deploy-action@v1
        with:
          api-key: ${{ secrets.RENDER_API_KEY }}
          service-id: ${{ secrets.RENDER_SERVICE_ID }}

      - name: Comment PR with preview URL
        uses: actions/github-script@v7
        with:
          script: |
            // RENDER_PREVIEW_URL is assumed to be set by an earlier step
            // (e.g., looked up via the Render API); it is not provided automatically.
            const previewUrl = process.env.RENDER_PREVIEW_URL;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `Preview environment ready: ${previewUrl}\n\nRuns against isolated database seeded from fixtures.`
            });

Our Preview Environment Workflow

The workflow this team follows:

  1. Engineer opens a PR. Render auto-deploys the branch within 3-5 minutes.
  2. GitHub posts the preview URL as a deployment status on the PR. The reviewer clicks the URL directly from GitHub.
  3. QA reviews on the preview URL. No access to staging needed.
  4. For features touching payments or external APIs, the preview uses stub/sandbox credentials scoped to that environment.
  5. PR merges. Render destroys the preview environment automatically (after previewsExpireAfterDays if set, or immediately on merge).
  6. Main branch deploys to production.

Database handling in preview environments:

Preview databases are provisioned from Render’s starter tier (lower cost), migrated on first deploy, and seeded via a seed script:

// package.json
{
  "scripts": {
    "db:seed:preview": "tsx prisma/seed-preview.ts"
  }
}

// prisma/seed-preview.ts — deterministic test data for QA
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // Only runs in non-production environments
  if (process.env.NODE_ENV === 'production') {
    throw new Error('Never run preview seed against production');
  }

  await prisma.user.createMany({
    data: [
      { email: 'admin@preview.test', role: 'ADMIN' },
      { email: 'user@preview.test', role: 'USER' },
    ],
    skipDuplicates: true,
  });

  console.log('Preview seed complete');
}

main()
  .catch(console.error)
  .finally(() => prisma.$disconnect());

Add the seed script to your Render build command for preview environments:

# render.yaml
services:
  - type: web
    name: api
    buildCommand: pnpm install && pnpm build && pnpm db:migrate && pnpm db:seed:preview
    previewsEnabled: true

Key Differences

Azure DevOps / App Service | Render / Vercel
Environments are provisioned manually or via ARM/Bicep | Environments auto-provision per PR, zero config
Deployment slots share the parent App Service Plan | Preview environments are fully isolated services
Slots typically share the production database (with feature flags to isolate data) | Render provisions a separate database per preview
Environment variables configured per slot in portal | Env vars inherited from parent, overridable per branch
Preview URL requires manual setup (Traffic Manager, CNAME) | Preview URL is auto-generated and posted to the PR
Slot is persistent until manually deleted | Preview is destroyed when the PR closes
Cost: slot runs continuously on shared plan | Cost: preview databases have a small per-instance cost

Gotchas for .NET Engineers

1. Preview databases are real databases that cost money. Each preview environment on Render with previewPlan: starter incurs a small charge. If your team opens 10 PRs simultaneously and forgets to configure previewsExpireAfterDays, you accumulate idle database instances. Set the expiry, audit periodically with render services list (see the Quick Reference below), and close PRs promptly when abandoned.

2. Schema migrations run against the preview database on every deploy — including destructive ones. Render runs your build command (which includes prisma migrate deploy) on every push to the PR branch. If you push a migration that drops a column and then push a fix 10 minutes later, the column is gone. Preview databases are ephemeral, but mid-review destructive migrations will confuse QA reviewers who are mid-session. Use additive migrations for in-progress features (add the new column first, remove the old one in a later PR).

3. Environment variable inheritance catches people expecting isolation. Preview environments inherit all environment variables from the parent service unless you explicitly override them. If your parent service has STRIPE_SECRET_KEY set to the production key and you do not use sync: false for previews, your preview environment will attempt real Stripe charges. Always audit which credentials are inherited and whether that is safe for a PR environment a reviewer will click through.

4. The preview URL changes on every new Render deployment. Unlike Azure deployment slots, which have a stable .azurewebsites.net URL, Render preview URLs may rotate when the service is redeployed from scratch (e.g., after a configuration change). Do not embed preview URLs in external systems or test suites. Always retrieve the current URL from the Render dashboard or the GitHub Deployment status.

5. Vercel preview deployments do not share a backend. When the frontend is on Vercel and the backend is on Render, your Vercel preview deployment needs to point to the correct Render preview API URL. This requires the NEXT_PUBLIC_API_URL env var on the Vercel preview to be updated to match the Render preview URL. This is not automatic — you need a GitHub Actions step that queries the Render API for the preview URL and updates the Vercel environment variable, or a simpler approach: use a stable preview subdomain on your custom domain that always routes to the latest preview build.
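
A hedged sketch of the GitHub Actions half of that wiring, assuming the Render preview URL has already been looked up (from the dashboard or the Render API) into $PREVIEW_API_URL, and $BRANCH is the PR branch name:

# Scope the variable to this branch's preview deployments, then redeploy so
# the value is baked into the Next.js build
echo "$PREVIEW_API_URL" | vercel env add NEXT_PUBLIC_API_URL preview "$BRANCH" --token="$VERCEL_TOKEN"
vercel deploy --token="$VERCEL_TOKEN"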


Hands-On Exercise

Set up a complete preview environment workflow for a minimal NestJS + Next.js project.

Step 1: Add render.yaml to your repository root with previewsEnabled: true and a previewPlan: starter database.

Step 2: Create a prisma/seed-preview.ts that inserts 3 deterministic users and 5 test records. Add a guard at the top that throws if NODE_ENV === 'production'.

Step 3: Add the seed to the build command in render.yaml for preview contexts.

Step 4: Open a test PR against your repository. Verify that:

  • Render creates a new service instance visible in the dashboard
  • The GitHub PR shows a deployment status with the preview URL
  • Hitting the preview URL returns a 200 from the API health endpoint
  • The seed data is present (query /api/users and confirm your test users appear)

Step 5: Close the PR without merging. Confirm the preview environment is destroyed (check the Render dashboard after ~2 minutes).

Step 6: Add a GitHub Actions job that posts the preview URL as a PR comment. Test it by opening another PR.


Quick Reference

# View all active Render services (including previews)
render services list

# Trigger a manual deploy on Render (useful for debugging)
render deploys create --service-id srv-xxxx

# Check Vercel preview deployments for a project
vercel ls --token=$VERCEL_TOKEN

# Promote a specific Vercel preview to production
vercel promote <deployment-url> --token=$VERCEL_TOKEN

# List environment variables on a Vercel project (preview scope)
vercel env ls preview

# Add/update a Vercel env var for preview only
vercel env add MY_VAR preview

# Set Render env var via CLI
render env set MY_VAR=value --service-id srv-xxxx

# render.yaml — minimal preview environment config
services:
  - type: web
    name: api
    runtime: node
    buildCommand: pnpm install && pnpm build
    startCommand: node dist/main.js
    previewsEnabled: true
    previewsExpireAfterDays: 7
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: db
          property: connectionString
      - key: STRIPE_SECRET_KEY
        sync: false              # must be set manually per preview env

databases:
  - name: db
    previewPlan: starter

Further Reading

6.7 — Domain Management & TLS: IIS to Modern Hosting

For .NET engineers who know: IIS Manager SSL certificate bindings, Azure App Service custom domains, and ASP.NET Core CORS middleware
You’ll learn: How TLS, custom domains, and CORS work on Render — and why most of what IIS made manual is now zero-configuration
Time: 10-15 min read


The .NET Way (What You Already Know)

In IIS, configuring HTTPS for a new site is a multi-step manual process. You generate or import a certificate (PFX from a CA, or a self-signed cert for dev), bind it to the site through IIS Manager under Site Bindings, set the SNI flag if you’re running multiple sites on the same IP, and configure an HTTP-to-HTTPS redirect either through URL Rewrite rules or an additional binding. Certificate renewal is a calendar event — you renew manually every one or two years, reimport the PFX, rebind, and restart the site. Forgetting triggers a production outage accompanied by browser certificate warnings.

In Azure App Service, the experience improved: you can upload a certificate or use App Service Managed Certificates (free Let’s Encrypt integration), point a CNAME from your registrar to the .azurewebsites.net domain, and verify ownership through a TXT record. Custom domain validation uses either a CNAME verification token or an A record. HTTPS is enforced with a single toggle in the portal. It is better than IIS but still requires navigating multiple portal blades.

CORS in ASP.NET Core is configured via middleware in Program.cs, with named policies that controllers or action methods reference via [EnableCors]:

// Program.cs — ASP.NET Core CORS
builder.Services.AddCors(options =>
{
    options.AddPolicy("AllowFrontend", policy =>
    {
        policy.WithOrigins("https://app.example.com")
              .AllowAnyHeader()
              .AllowAnyMethod()
              .AllowCredentials();
    });
});

app.UseCors("AllowFrontend");

The Modern Hosting Way

On Render, TLS is automatic and invisible. There is no certificate import, no binding dialog, no renewal reminder. Render provisions a Let’s Encrypt certificate for every custom domain the moment you add it, renews automatically before expiry, and enforces HTTPS-only by default. The operational overhead of TLS drops to approximately zero.

DNS Configuration for Render

When you add a custom domain to a Render service, the dashboard shows you exactly which DNS records to create:

For a root domain (example.com):

  • Type: A
  • Name: @ (or blank, depending on your registrar)
  • Value: 216.24.57.1 (Render’s load balancer IP — verify in your dashboard, as this can change)

Some registrars do not support ALIAS or ANAME records for root domains. In that case, use Cloudflare as your DNS provider (free tier), which supports CNAME flattening at the root: switch your domain’s nameservers at the registrar over to Cloudflare’s, then create a CNAME pointing @ to your Render service URL.

For a subdomain (api.example.com):

  • Type: CNAME
  • Name: api
  • Value: your-service-name.onrender.com

DNS propagation typically takes 5-30 minutes, occasionally up to 24 hours for some registrars. Render’s dashboard shows a green checkmark when the domain resolves correctly and the TLS certificate is issued.

Automatic TLS

Render uses Let’s Encrypt under the hood. The lifecycle:

  1. You add a custom domain in the Render dashboard.
  2. Render validates domain ownership by checking that the DNS record points to Render’s infrastructure.
  3. Render issues a Let’s Encrypt certificate (usually within 1-2 minutes after DNS propagates).
  4. Render automatically renews the certificate 30 days before expiry.
  5. You never think about certificates again.

There is no manual intervention at any step. If the certificate fails to issue (usually because DNS has not propagated yet), Render retries automatically and shows the error state in the dashboard.

HTTP-to-HTTPS Redirects

Render enforces HTTPS at the load balancer level. HTTP requests are redirected to HTTPS automatically — no configuration needed in your NestJS or Next.js application code. Unlike IIS URL Rewrite rules or ASP.NET Core’s UseHttpsRedirection() middleware, there is nothing to configure.

You can still add an HTTP-to-HTTPS redirect middleware in NestJS (the equivalent of ASP.NET Core’s UseHttpsRedirection()) if you want belt-and-suspenders behavior in development (it has no effect on Render since plain HTTP never reaches your application), but it is not necessary.
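
If you do want the belt-and-suspenders version, a minimal sketch of an Express-level redirect in main.ts (it relies on trust proxy, covered in the next section, so that req.secure reflects X-Forwarded-Proto):

// main.ts: optional HTTP-to-HTTPS redirect (redundant on Render, harmless elsewhere)
import { NestFactory } from '@nestjs/core';
import { Request, Response, NextFunction } from 'express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.getHttpAdapter().getInstance().set('trust proxy', 1);

  app.use((req: Request, res: Response, next: NextFunction) => {
    if (process.env.NODE_ENV === 'production' && !req.secure) {
      return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
    }
    next();
  });

  await app.listen(process.env.PORT ?? 3000);
}

bootstrap();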

Custom Domains — Full Walkthrough

# Using the Render CLI to add a custom domain
render domains add api.example.com --service-id srv-xxxx

# List domains on a service
render domains list --service-id srv-xxxx

# Check TLS certificate status
render domains get api.example.com --service-id srv-xxxx

Alternatively, in the Render dashboard:

  1. Select your service
  2. Open the Custom Domains tab
  3. Click Add Custom Domain
  4. Enter the domain name
  5. Copy the DNS record Render provides
  6. Create the record in your DNS provider
  7. Click Verify — Render checks propagation and issues the certificate

Reverse Proxy Considerations

Render sits behind a load balancer, which means your NestJS application receives requests with the original client IP in the X-Forwarded-For header, not the TCP connection IP. Configure NestJS to trust the proxy:

// main.ts — trust proxy headers from Render's load balancer
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Tell Express (underlying NestJS) to trust the first proxy hop
  app.getHttpAdapter().getInstance().set('trust proxy', 1);

  await app.listen(process.env.PORT ?? 3000);
}

bootstrap();

Without this, req.ip returns the load balancer’s internal IP instead of the client IP. Rate limiting middleware, geo-restriction, and audit logging all depend on the correct client IP.

The X-Forwarded-Proto header tells your app whether the original request was HTTP or HTTPS. After setting trust proxy, Express reads this correctly, so req.secure returns true for HTTPS requests even though Render terminates TLS at the load balancer.
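
To verify the proxy settings end to end, a small, hypothetical debug controller that echoes what the app sees (register it in a module, and remove it once you have confirmed the values):

// debug.controller.ts: hypothetical endpoint to confirm trust proxy is working
import { Controller, Get, Req } from '@nestjs/common';
import { Request } from 'express';

@Controller('debug')
export class DebugController {
  @Get('ip')
  whoAmI(@Req() req: Request) {
    return {
      ip: req.ip,                                    // should be the real client IP
      forwardedFor: req.headers['x-forwarded-for'],  // set by Render's load balancer
      secure: req.secure,                            // true once X-Forwarded-Proto is trusted
    };
  }
}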


CORS in NestJS vs. ASP.NET Core

The conceptual model is identical — you declare allowed origins, methods, and headers, and the framework handles the preflight OPTIONS response. The implementation differs in syntax.

NestJS CORS setup in main.ts:

// main.ts — CORS configuration for NestJS
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  app.enableCors({
    // In production: specific origins only.
    // In development: allow localhost.
    origin: process.env.NODE_ENV === 'production'
      ? [
          'https://app.example.com',
          'https://www.example.com',
        ]
      : ['http://localhost:3001', 'http://localhost:5173'],

    methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
    allowedHeaders: ['Content-Type', 'Authorization', 'X-Request-ID'],
    credentials: true,        // required if sending cookies or Authorization headers
    maxAge: 86400,            // cache preflight response for 24 hours
  });

  await app.listen(process.env.PORT ?? 3000);
}

bootstrap();

Dynamic origin validation (allow multiple preview URLs):

Preview environments generate unique URLs per PR. A static origin allowlist breaks every preview. Use a function to validate origins dynamically:

// main.ts — dynamic CORS origin for preview environments
app.enableCors({
  origin: (origin, callback) => {
    // Allow requests with no origin (curl, Postman, server-to-server)
    if (!origin) return callback(null, true);

    const allowedPatterns = [
      /^https:\/\/app\.example\.com$/,
      /^https:\/\/.*\.vercel\.app$/,       // all Vercel preview deployments
      /^https:\/\/.*\.onrender\.com$/,      // all Render preview deployments
      /^http:\/\/localhost:\d+$/,           // local development
    ];

    const allowed = allowedPatterns.some((pattern) => pattern.test(origin));

    if (allowed) {
      callback(null, true);
    } else {
      callback(new Error(`CORS policy: origin ${origin} is not allowed`));
    }
  },
  credentials: true,
  methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
  allowedHeaders: ['Content-Type', 'Authorization'],
});

Per-route CORS override:

NestJS can also set response headers per route, either with the @Header() decorator from @nestjs/common or directly on the response object, which covers the rare one-off override:

// Rarely needed — prefer global CORS config
import { Controller, Post, Res } from '@nestjs/common';
import { Response } from 'express';

@Controller('webhooks')
export class WebhooksController {
  @Post('stripe')
  handleStripe(@Res() res: Response) {
    // Stripe sends webhooks from their servers — no browser, no CORS.
    // If you need to handle a specific case:
    res.header('Access-Control-Allow-Origin', 'https://hooks.stripe.com');
    res.status(200).json({ received: true });
  }
}

Key Differences

IIS / Azure App Service | Render
Certificates imported manually (PFX), renewed annually | Let’s Encrypt, issued automatically, renewed automatically
Custom domains configured via portal UI, multiple steps | Add domain name in dashboard, create one DNS record
HTTP redirect configured via URL Rewrite or middleware | HTTP-to-HTTPS redirect enforced at the load balancer
Client IP available in REMOTE_ADDR | Client IP in X-Forwarded-For; requires trust proxy
TLS termination at the application server or ARR | TLS termination at Render’s edge load balancer
CORS via middleware in Program.cs | CORS via app.enableCors() in main.ts
Wildcard CORS for previews requires custom middleware | Dynamic origin function handles preview URLs cleanly

Gotchas for .NET Engineers

1. X-Forwarded-For must be trusted explicitly or rate limiting breaks. On Azure App Service with ARR, IP forwarding is configured at the infrastructure level. On Render, you must add app.getHttpAdapter().getInstance().set('trust proxy', 1) to main.ts. Without it, every request appears to come from the same internal Render IP address. Rate limiters keyed on client IP will throttle your entire user base after the first burst instead of per-user. This affects @nestjs/throttler, custom rate limiting, and any audit logging that records IPs.

2. CORS with credentials: true requires an explicit origin — wildcard is blocked by the spec. If you set credentials: true and origin: '*', browsers reject the response with a CORS error. The spec prohibits credentialed requests to wildcard origins. You must list specific origins or use the dynamic origin function shown above. This trips up engineers who set origin: '*' for initial development and then add cookies or Authorization headers later.
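Inside the bootstrap() function shown earlier, the contrast looks like this (a sketch; the working origins are placeholders):

// ✗ Rejected by browsers: the CORS spec forbids Access-Control-Allow-Origin: *
//   on credentialed requests, so fetch(..., { credentials: 'include' }) calls fail.
app.enableCors({ origin: '*', credentials: true });

// ✓ Works: list explicit origins (or use the dynamic origin function shown above).
app.enableCors({
  origin: ['https://app.example.com', 'https://www.example.com'],
  credentials: true,
});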

3. Let’s Encrypt certificates will not issue if DNS has not propagated. If you add a custom domain in Render immediately after creating the DNS record, Render may attempt certificate issuance before the record propagates and mark it failed. The solution: wait for DNS propagation (verify with dig api.example.com or nslookup api.example.com), then trigger re-verification from the Render dashboard. Render does retry automatically, but impatient engineers sometimes cycle through multiple attempts and get confused by cached negative results.

4. Render does not automatically configure subdomains for preview environments. Preview environments get .onrender.com URLs, not subdomains of your custom domain. If your frontend is hardcoded to https://api.example.com, it will not hit the preview API — it will hit production. The solution is to read the API URL from an environment variable (NEXT_PUBLIC_API_URL for Next.js) and set that variable to the preview environment’s .onrender.com URL in preview deployments.
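A minimal sketch of that pattern on the frontend; apiFetch and the localhost fallback are illustrative, not part of any existing module:

// lib/api.ts — hypothetical helper; the base URL comes from the environment,
// so preview deployments point at their own .onrender.com API, not production.
const API_BASE_URL = process.env.NEXT_PUBLIC_API_URL ?? 'http://localhost:3000';

export async function apiFetch<T>(path: string, init?: RequestInit): Promise<T> {
  const res = await fetch(`${API_BASE_URL}${path}`, {
    credentials: 'include', // send cookies / Authorization cross-origin
    ...init,
  });
  if (!res.ok) throw new Error(`API ${res.status}: ${path}`);
  return res.json() as Promise<T>;
}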

5. The OPTIONS preflight request must return 204 or 200 — NestJS does this automatically but only if CORS is enabled. A common debugging trap: CORS works in Postman (no preflight), fails in the browser. Postman does not send OPTIONS preflights. If app.enableCors() is not called in main.ts, NestJS does not handle OPTIONS and returns 404. Always test CORS with an actual browser request, not Postman.
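If you want to reproduce the preflight outside a browser, send the OPTIONS request yourself. A rough sketch (the URL and origin are placeholders; Node 18+ provides a global fetch):

// check-preflight.ts — hypothetical one-off script
const res = await fetch('https://api.example.com/api/health', {
  method: 'OPTIONS',
  headers: {
    Origin: 'https://app.example.com',
    'Access-Control-Request-Method': 'GET',
    'Access-Control-Request-Headers': 'authorization,content-type',
  },
});

// Expect 200/204 plus Access-Control-Allow-* headers when CORS is configured;
// a 404 here usually means app.enableCors() was never called.
console.log(res.status, Object.fromEntries(res.headers.entries()));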


Hands-On Exercise

Set up a custom domain with automatic TLS and configure CORS for both production and preview environments.

Step 1: Add a custom domain to your Render service via the dashboard. Use a subdomain you control (api.yourname.example.com). Create the CNAME record at your DNS provider. Verify propagation with dig api.yourname.example.com until you see Render’s IP.

Step 2: Confirm TLS is issued by visiting https://api.yourname.example.com in a browser. Click the lock icon and inspect the certificate — confirm it is issued by Let’s Encrypt.

Step 3: Add trust proxy to your main.ts. Add a debug endpoint that returns req.ip and req.headers['x-forwarded-for']. Confirm the IP in the response matches your actual public IP (check against https://ifconfig.me).
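A minimal sketch of such a debug endpoint (the controller name and route are up to you; remove it after testing):

// debug.controller.ts — hypothetical endpoint for verifying trust proxy
import { Controller, Get, Req } from '@nestjs/common';
import { Request } from 'express';

@Controller('debug')
export class DebugController {
  @Get('ip')
  whoAmI(@Req() req: Request) {
    return {
      ip: req.ip, // with 'trust proxy' set, this is the client IP from X-Forwarded-For
      xForwardedFor: req.headers['x-forwarded-for'] ?? null,
    };
  }
}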

Step 4: Add the dynamic CORS origin function from above, allowing *.vercel.app and *.onrender.com. Open a preview environment URL and make a credentialed API request from the browser console:

// Run in browser console from a Vercel preview URL
fetch('https://your-api.onrender.com/api/health', {
  credentials: 'include',
}).then(r => r.json()).then(console.log);

Confirm the request succeeds. Then temporarily remove *.onrender.com from the allow list and confirm it fails with a CORS error.


Quick Reference

# Add a custom domain to a Render service
render domains add api.example.com --service-id srv-xxxx

# List all custom domains on a service
render domains list --service-id srv-xxxx

# Check DNS propagation
dig api.example.com CNAME

# Check TLS certificate details
openssl s_client -connect api.example.com:443 -servername api.example.com </dev/null \
  | openssl x509 -noout -dates -issuer

// main.ts — production-ready NestJS setup
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Trust Render's load balancer
  app.getHttpAdapter().getInstance().set('trust proxy', 1);

  // CORS with dynamic origin for preview environments
  app.enableCors({
    origin: (origin, callback) => {
      if (!origin) return callback(null, true); // server-to-server
      const allowed = [
        /^https:\/\/yourdomain\.com$/,
        /^https:\/\/.*\.vercel\.app$/,
        /^https:\/\/.*\.onrender\.com$/,
        /^http:\/\/localhost:\d+$/,
      ];
      const ok = allowed.some((p) => p.test(origin));
      callback(ok ? null : new Error('CORS blocked'), ok);
    },
    credentials: true,
    methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
    allowedHeaders: ['Content-Type', 'Authorization'],
    maxAge: 86400,
  });

  await app.listen(process.env.PORT ?? 3000);
}

bootstrap();

Further Reading

6.8 — Infrastructure as Code: ARM/Bicep vs. Pulumi/Terraform

For .NET engineers who know: ARM templates, Azure Bicep, and the Azure Resource Manager deployment model
You’ll learn: Where Terraform and Pulumi fit in the broader IaC ecosystem, what the Render Blueprint spec (render.yaml) gives you for free, and when IaC is actually worth the investment at our scale
Time: 10-15 min read


The .NET Way (What You Already Know)

ARM templates are JSON documents that describe Azure resources declaratively. The Azure Resource Manager evaluates the desired state, diffs it against the current state, and applies the delta. Bicep is a domain-specific language that compiles to ARM templates — it eliminates the JSON verbosity while keeping the same deployment model:

// Bicep — deploy an Azure App Service
param location string = 'eastus'
param appName string

resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: '${appName}-plan'
  location: location
  sku: {
    name: 'B1'
    tier: 'Basic'
  }
}

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: appName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    httpsOnly: true
    siteConfig: {
      nodeVersion: '20-lts'
    }
  }
}

The Azure DevOps pipeline runs az deployment group create --template-file main.bicep, ARM diffs the desired vs. current state, and provisions or updates the resource group. The deployment is idempotent — running it twice produces the same result.

ARM/Bicep works well within the Azure ecosystem but has no concept of resources outside it. Deploying to Render, AWS, or any non-Azure service means switching tools entirely.


The Broader IaC Ecosystem

Terraform

Terraform (by HashiCorp; since the 2023 switch to the Business Source License, the community fork OpenTofu continues under an open-source license) is the dominant IaC tool across cloud providers. It uses its own configuration language (HCL — HashiCorp Configuration Language) and supports every major cloud through a provider plugin system. The same Terraform project can provision an AWS RDS database, a Render web service, and a Cloudflare DNS record.

The mental model maps directly from Bicep:

Bicep Concept | Terraform Equivalent
Template file (.bicep) | Configuration files (.tf)
Resource definition | resource block
Parameter | variable block
Output | output block
Azure Resource Manager | Terraform state + provider
az deployment group create | terraform apply
What-if deployment | terraform plan
Resource group scope | Workspace / state file

A minimal Terraform example for a Render web service:

# main.tf — Terraform for Render
terraform {
  required_providers {
    render = {
      source  = "render-oss/render"
      version = "~> 1.0"
    }
  }
}

provider "render" {
  api_key = var.render_api_key
  owner_id = var.render_owner_id
}

variable "render_api_key" {
  type      = string
  sensitive = true
}

variable "render_owner_id" {
  type = string
}

# PostgreSQL database
resource "render_postgres" "db" {
  name    = "myapp-db"
  plan    = "starter"
  region  = "oregon"
  version = "15"
}

# Web service
resource "render_web_service" "api" {
  name   = "myapp-api"
  plan   = "starter"
  region = "oregon"

  runtime_source = {
    native_runtime = {
      auto_deploy    = true
      branch         = "main"
      build_command  = "pnpm install && pnpm build"
      build_filter   = { paths = ["apps/api/**", "packages/**"] }
      repo_url       = "https://github.com/your-org/your-repo"
      runtime        = "node"
      start_command  = "node dist/main.js"
    }
  }

  env_vars = {
    NODE_ENV = { value = "production" }
    DATABASE_URL = {
      value = render_postgres.db.connection_string
    }
  }
}

output "api_url" {
  value = render_web_service.api.service_details.url
}

# Terraform workflow
terraform init          # download providers
terraform plan          # show what will change
terraform apply         # apply changes
terraform destroy       # tear everything down

The state file (terraform.tfstate) records which real resources Terraform manages. Store it remotely (Terraform Cloud, S3 with DynamoDB locking) for team environments — never commit it to git, as it contains sensitive values.

Pulumi

Pulumi is infrastructure as code written in actual programming languages — TypeScript, Python, Go, C#. For a team coming from .NET, Pulumi’s C# support is a genuine draw. For our team specifically, the TypeScript support is the relevant one:

// index.ts — Pulumi TypeScript for Render + AWS
import * as render from '@pulumi/render';
import * as aws from '@pulumi/aws';

// PostgreSQL on Render
const db = new render.PostgresDatabase('myapp-db', {
  name: 'myapp-db',
  plan: 'starter',
  region: 'oregon',
  databaseVersion: 'POSTGRES_15',
});

// S3 bucket for file uploads (AWS)
const bucket = new aws.s3.Bucket('uploads', {
  acl: 'private',
  versioning: { enabled: false },
});

// Render web service
const api = new render.WebService('myapp-api', {
  name: 'myapp-api',
  plan: 'starter',
  region: 'oregon',
  runtimeSource: {
    nativeRuntime: {
      repoUrl: 'https://github.com/your-org/your-repo',
      branch: 'main',
      buildCommand: 'pnpm install && pnpm build',
      startCommand: 'node dist/main.js',
      runtime: 'node',
    },
  },
  envVars: [
    { key: 'DATABASE_URL', value: db.connectionString },
    { key: 'AWS_BUCKET', value: bucket.bucket },
    { key: 'NODE_ENV', value: 'production' },
  ],
});

export const apiUrl = api.serviceDetails.url;
export const bucketName = bucket.bucket;

# Pulumi workflow
pulumi login            # authenticate (Pulumi Cloud or self-hosted)
pulumi stack init dev   # create a stack (equivalent to environment)
pulumi up               # preview and apply changes
pulumi destroy          # tear down the stack

Pulumi advantages over Terraform for our team:

  • TypeScript is the native language — no HCL to learn
  • Full language features: loops, conditionals, abstractions, imports (see the sketch after this list)
  • Can import existing Pulumi component libraries shared across projects
  • Pulumi Cloud handles state, secrets, and audit history
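
To make the full-language-features point concrete, here is a sketch that loops over environments with plain TypeScript; it assumes a Pulumi project with @pulumi/aws configured, and the names and tags are illustrative:

// One private uploads bucket per environment: a loop instead of three copy-pasted blocks.
import * as aws from '@pulumi/aws';

const environments = ['dev', 'staging', 'prod'];

export const uploadBuckets = environments.map(
  (env) =>
    new aws.s3.Bucket(`uploads-${env}`, {
      acl: 'private',
      tags: { environment: env },
    })
);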

Pulumi trade-offs:

  • Smaller community and provider ecosystem than Terraform
  • Pulumi Cloud has a free tier but costs money at scale
  • TypeScript compilation adds a step before deployments execute

Render Blueprint Spec (render.yaml)

For most teams at our scale, render.yaml is sufficient and Terraform/Pulumi is overkill. The Render Blueprint spec is a YAML file committed to your repository that defines all services, databases, environment variables, and cron jobs. Render reads it and provisions (or updates) resources to match.

# render.yaml — complete application definition
services:
  - type: web
    name: api
    runtime: node
    plan: starter
    region: oregon
    buildCommand: pnpm install && pnpm build
    startCommand: node dist/main.js
    healthCheckPath: /api/health
    autoDeploy: true
    branch: main
    scaling:
      minInstances: 1
      maxInstances: 3
      targetMemoryPercent: 70
    envVars:
      - key: NODE_ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: postgres-db
          property: connectionString
      - key: REDIS_URL
        fromService:
          name: redis
          type: redis
          property: connectionString
      - key: STRIPE_SECRET_KEY
        sync: false     # must be set manually in dashboard

  - type: web
    name: frontend
    runtime: node
    plan: starter
    region: oregon
    buildCommand: pnpm install && pnpm build
    startCommand: pnpm start    # Next.js runs as a Node web service; staticPublishPath applies only to static sites
    envVars:
      - key: NEXT_PUBLIC_API_URL
        fromService:
          name: api
          type: web
          property: host

  - type: cron
    name: cleanup-job
    runtime: node
    schedule: "0 2 * * *"       # 2am UTC daily
    buildCommand: pnpm install && pnpm build
    startCommand: node dist/jobs/cleanup.js

  - type: redis
    name: redis
    plan: starter
    region: oregon

databases:
  - name: postgres-db
    databaseName: myapp
    user: myapp
    plan: starter
    region: oregon
    previewPlan: starter

Blueprint commands:

# Validate render.yaml before pushing
render blueprint validate

# Deploy the blueprint (useful in CI)
render blueprint launch --yes

The key difference from Terraform: the Blueprint spec lives in your repository, is read by Render on every push, and Render manages the state. You do not manage a state file. You do not run plan/apply locally. The trade-off is that Render’s Blueprint has fewer capabilities than Terraform — it can only manage Render resources, and it cannot express complex conditionals or reuse abstractions across projects.


When You Need IaC vs. When the Dashboard Is Sufficient

At our current scale, this decision framework applies:

Situation | Recommendation
Single team, one Render project, one region | render.yaml Blueprint is sufficient
Need to provision AWS resources (S3, SES, CloudFront) alongside Render | Add Terraform for the AWS pieces; keep render.yaml for Render
Multiple environments (dev, staging, prod) with environment-specific config | render.yaml with environment variable overrides per environment
Multi-region, multi-cloud, complex networking | Terraform or Pulumi — you have outgrown the dashboard
You want to version-control all infrastructure changes with PR reviews | render.yaml for Render, Terraform for everything else
Onboarding new engineers — they need a reproducible environment | render.yaml with seed scripts is sufficient
Disaster recovery — rebuild the entire stack from scratch in 30 minutes | Terraform or Pulumi — the Blueprint can rebuild Render, but not DNS, CDN, S3

The honest answer for our team today: render.yaml covers 90% of what we need. Terraform is worth introducing when we add AWS services (SES for email, CloudFront for CDN, or RDS for a production database tier). Pulumi is worth evaluating if we find ourselves writing complex Terraform with a lot of conditional logic that would benefit from actual TypeScript.


Gotchas for .NET Engineers

1. Terraform state is not in the configuration files — it is a separate artifact you must manage. In ARM/Bicep, the ARM Resource Manager tracks state in Azure’s own database. In Terraform, state lives in a terraform.tfstate file. By default this file is local, which means it is lost if your machine is lost, and two engineers running terraform apply simultaneously will corrupt it. Always configure remote state from the start:

# backend.tf — remote state in S3 (or use Terraform Cloud)
terraform {
  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "myapp/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"  # prevents concurrent applies
    encrypt        = true
  }
}

Never commit terraform.tfstate to git. Add it to .gitignore immediately.

2. pulumi up diffs against Pulumi’s state, not against live infrastructure, so drift from manual changes goes unnoticed. If someone creates a resource manually in the Render dashboard outside of Pulumi, Pulumi does not know about it, and the next pulumi up will not see the manual resource (pulumi refresh reconciles state with reality). This is the same state-drift problem Terraform has, but TypeScript engineers used to having full control over their code are sometimes surprised that the program is diffed against stored state rather than re-evaluated from scratch each time.

3. Render Blueprint fromService references assume the service exists — circular dependencies will fail. If frontend references api’s host URL via fromService, and both are in the same Blueprint, Render must create api before frontend. Render handles this ordering automatically for simple cases, but circular references (service A needs service B’s URL, service B needs service A’s URL) will cause the Blueprint deployment to fail. Restructure to eliminate the circular dependency, typically by extracting the shared value into a separate environment variable set manually.

4. HCL (Terraform’s language) has a learning curve that looks easier than it is. HCL appears simple but has subtle rules around type coercion, the for_each meta-argument, and module variable scoping that are not obvious from reading examples. Engineers who assume it is “just JSON with loops” hit walls quickly when they try to build conditional logic or dynamic resource counts. Budget time to read the Terraform documentation properly rather than copying examples and adjusting values.

5. Destroying and recreating is not the same as updating. Terraform and Pulumi aim to update resources in-place when possible, but some resource properties are immutable — changing them forces destroy-and-recreate. For databases, this means data loss. Always run terraform plan and look for forces replacement annotations before applying any changes that touch database resources. The Render postgres resource’s plan and region fields are immutable — changing them destroys the database.


Hands-On Exercise

Create a render.yaml Blueprint spec for a realistic application and validate it locally.

Step 1: Create a render.yaml at the root of a repository. Define:

  • A NestJS API web service with healthCheckPath: /api/health
  • A Next.js frontend web service
  • A PostgreSQL database
  • A cron job that runs at midnight UTC
  • An environment variable on the API that reads the database connection string from the database definition

Step 2: Validate the Blueprint spec:

npm install -g @render-oss/render-cli
render login
render blueprint validate

Fix any validation errors the CLI reports.

Step 3: Add previewsEnabled: true and previewsExpireAfterDays: 3 to the API service. Add a previewPlan: starter to the database.

Step 4 (optional — awareness level): Install Terraform and create a main.tf that provisions one S3 bucket. Run terraform init, terraform plan, and terraform apply. Observe the state file created locally. Add terraform.tfstate and terraform.tfstate.backup to .gitignore.

Step 5 (optional — awareness level): Create a Pulumi TypeScript project (pulumi new typescript). Replace the default index.ts with code that creates one AWS S3 bucket. Run pulumi up and observe the diff. Run pulumi destroy.


Quick Reference

# Render Blueprint
render blueprint validate                    # validate render.yaml
render blueprint launch --yes               # deploy blueprint
render services list                         # list all services

# Terraform
terraform init                               # initialize (download providers)
terraform plan                               # show planned changes
terraform apply                              # apply changes
terraform apply -target=render_web_service.api  # apply one resource
terraform destroy                            # destroy all resources
terraform state list                         # list tracked resources
terraform state show render_web_service.api  # inspect one resource

# Pulumi
pulumi login                                 # authenticate
pulumi stack init dev                        # create environment stack
pulumi up                                    # preview and apply
pulumi preview                               # dry-run only
pulumi destroy                               # destroy stack
pulumi stack ls                              # list stacks
pulumi config set MY_VAR value              # set stack config variable
pulumi config set --secret MY_SECRET value  # set encrypted secret

# render.yaml — annotated minimal example
services:
  - type: web
    name: api
    runtime: node
    plan: starter          # starter | standard | pro
    region: oregon         # oregon | frankfurt | singapore | ohio
    branch: main
    buildCommand: pnpm install && pnpm build
    startCommand: node dist/main.js
    healthCheckPath: /api/health
    autoDeploy: true
    previewsEnabled: true
    previewsExpireAfterDays: 7
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: db
          property: connectionString  # connectionString | host | port | database | user | password

databases:
  - name: db
    plan: starter
    region: oregon
    previewPlan: starter

Further Reading

6.9 — Local Development Environment Setup

For .NET engineers who know: Install Visual Studio, install the .NET SDK, clone the repo, press F5
You’ll learn: The Node.js ecosystem has more moving parts — Node version management, pnpm, shell config, and a setup script that gets a new engineer from zero to running in under 20 minutes
Time: 15-20 min read


The .NET Way (What You Already Know)

Setting up a .NET project on a new machine is a well-understood procedure. Download the .NET SDK installer for the version your project targets, install Visual Studio or Rider, clone the repository, and open the solution. The IDE handles package restore on first build. If the project targets .NET 8, you install .NET 8. Two projects targeting different versions coexist cleanly — the SDK installer handles side-by-side installs. The only variable is whether the project requires SQL Server locally (Docker or LocalDB), but even that is well-documented.

The predictability is real and worth acknowledging before explaining why the Node.js equivalent requires more deliberate setup.


Why Node Setup Has More Moving Parts

Several factors combine to make Node.js environment setup less deterministic by default:

Node.js does not ship with a global version manager. The .NET SDK installer handles multiple side-by-side versions transparently. With Node.js, you install a version manager separately, and if you do not, you end up with a single system Node version that conflicts with every project requiring a different version.

The global vs. local package distinction matters more. In .NET, tools like the EF CLI are installed with dotnet tool install, either globally or per-project via a tool manifest, so the version is pinned and restored deterministically. In Node.js, the boundary between global tools and project dependencies is blurrier, and installing tools globally in the wrong version causes subtle failures.

Shell configuration affects the toolchain. Version managers (nvm, fnm) inject themselves into your shell via .bashrc/.zshrc. If those files are not configured correctly, the version manager exists but does not activate on shell start, and engineers spend 20 minutes debugging why node -v returns the wrong version.

The solution is to make setup explicit, scripted, and reproducible — which is what this article covers.


Step 1: Node.js Version Management

Never install Node.js directly from nodejs.org for development work. Install a version manager and use it to install and switch Node versions.

Option A: fnm (recommended)

fnm (Fast Node Manager) is written in Rust, loads faster than nvm, and handles .nvmrc and .node-version files automatically:

# macOS (Homebrew)
brew install fnm

# Windows (Winget)
winget install Schniz.fnm

# Linux (curl installer)
curl -fsSL https://fnm.vercel.app/install | bash

Add fnm to your shell in ~/.zshrc (macOS) or ~/.bashrc (Linux):

# ~/.zshrc
eval "$(fnm env --use-on-cd --shell zsh)"

The --use-on-cd flag tells fnm to automatically switch Node versions when you cd into a directory that has a .nvmrc or .node-version file. This is the equivalent of global.json for the .NET SDK — the version is declared in the repo, and the toolchain respects it automatically.

# Install and activate a specific Node version
fnm install 22          # install Node 22 LTS
fnm use 22              # activate it in current shell
fnm default 22          # make it the default for new shells

# Verify
node -v                 # v22.x.x
npm -v                  # 10.x.x

Option B: nvm (Widely used, slower)

nvm has the largest community and most documentation, but it is a shell script and adds ~70ms to every shell startup:

# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# ~/.zshrc (added automatically by installer, verify it is there)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
nvm install 22
nvm use 22
nvm alias default 22

Declaring the Node version in the repository

Add a .nvmrc file (works with both fnm and nvm) to the repository root:

# .nvmrc
22

With fnm’s --use-on-cd flag, cd-ing into the project directory automatically switches to the declared Node version. If that version is not installed yet, fnm tells you immediately; new engineers get a clear error instead of a silent build on the wrong version.


Step 2: pnpm Installation

Install pnpm after Node.js is set up. The canonical installer is Corepack, which ships with Node 18+ and manages package manager versions:

# Enable Corepack (ships with Node, just needs activation)
corepack enable

# Make the latest pnpm available as the global default. Projects that pin a
# version via the packageManager field in package.json still get their pinned version.
corepack prepare pnpm@latest --activate

Alternatively, install pnpm globally:

npm install -g pnpm@9

# Verify
pnpm -v    # 9.x.x

Pin the pnpm version in package.json so every engineer and CI uses the same version:

{
  "name": "myapp",
  "packageManager": "pnpm@9.15.0",
  "engines": {
    "node": ">=22.0.0",
    "pnpm": ">=9.0.0"
  }
}

The packageManager field tells Corepack which version to activate. The engines field tells pnpm to warn (or fail with engine-strict=true) if the installed versions do not match.


Step 3: VS Code Extensions for TypeScript / React / Vue

VS Code is the standard editor for this stack. These extensions are non-negotiable for productive TypeScript development:

Required:

Extension | ID | Purpose
ESLint | dbaeumer.vscode-eslint | Lint errors inline as you type
Prettier | esbenp.prettier-vscode | Auto-format on save
TypeScript Vue Plugin (Volar) | Vue.volar | Vue 3 SFC support (replaces Vetur)
Prisma | Prisma.prisma | Schema syntax highlighting, format on save
GitLens | eamodio.gitlens | Inline blame, PR annotations
Error Lens | usernamehw.errorlens | Inline error messages without hovering

Highly recommended:

Extension | ID | Purpose
Thunder Client | rangav.vscode-thunder-client | REST client embedded in VS Code
Docker | ms-azuretools.vscode-docker | Docker Compose management
DotENV | mikestead.dotenv | .env file syntax highlighting
Import Cost | wix.vscode-import-cost | Shows bundle size of each import inline
Tailwind CSS IntelliSense | bradlc.vscode-tailwindcss | Autocomplete for Tailwind classes

Install all at once:

code --install-extension dbaeumer.vscode-eslint
code --install-extension esbenp.prettier-vscode
code --install-extension Vue.volar
code --install-extension Prisma.prisma
code --install-extension eamodio.gitlens
code --install-extension usernamehw.errorlens
code --install-extension rangav.vscode-thunder-client
code --install-extension ms-azuretools.vscode-docker
code --install-extension mikestead.dotenv
code --install-extension bradlc.vscode-tailwindcss

Workspace settings (.vscode/settings.json committed to the repo):

{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": "explicit"
  },
  "typescript.tsdk": "node_modules/typescript/lib",
  "typescript.enablePromptUseWorkspaceTsdk": true,
  "[vue]": {
    "editor.defaultFormatter": "Vue.volar"
  },
  "[prisma]": {
    "editor.defaultFormatter": "Prisma.prisma"
  },
  "eslint.validate": ["javascript", "javascriptreact", "typescript", "typescriptreact", "vue"],
  "files.exclude": {
    "**/.git": true,
    "**/node_modules": true,
    "**/.next": true,
    "**/dist": true
  },
  "search.exclude": {
    "**/node_modules": true,
    "**/.next": true,
    "**/dist": true,
    "pnpm-lock.yaml": true
  }
}

Committing .vscode/settings.json ensures every engineer in the repo gets the same formatter, linter, and TypeScript SDK configuration — equivalent to the .editorconfig + Roslyn analyzer settings in a .NET solution.


Step 4: Essential CLI Tools

# GitHub CLI — pull request and issue management
brew install gh

# Authenticate once
gh auth login

# Docker — local containers for PostgreSQL, Redis
brew install --cask docker
# Start Docker Desktop after install

# Render CLI
npm install -g @render-oss/render-cli
render login

# jq — JSON processing in terminal scripts
brew install jq

# httpie — friendlier alternative to curl for API testing
brew install httpie

Verify the GitHub CLI is authenticated and can reach your organization:

gh auth status
gh repo list your-org --limit 5

Step 5: Shell Configuration for Productivity

A well-configured shell reduces friction on daily tasks. None of this is strictly required, but it pays off quickly.

~/.zshrc additions:

# fnm — Node version manager (add after fnm install)
eval "$(fnm env --use-on-cd --shell zsh)"

# pnpm shortcuts
alias pi="pnpm install"
alias pd="pnpm dev"
alias pb="pnpm build"
alias pt="pnpm test"
alias ptw="pnpm test --watch"

# Git shortcuts
alias gs="git status"
alias gp="git pull --rebase"
alias gcm="git checkout main && git pull --rebase"

# Render CLI
alias rl="render logs --tail"      # tail logs for a service

# Quick project navigation (adjust paths to match your machine)
alias dev="cd ~/dev2"
alias app="cd ~/dev2/myapp"

# Show current Node version in the prompt (optional — zsh only)
# Add to PROMPT variable if you use a custom prompt

.editorconfig committed to the repository root:

# .editorconfig — respected by VS Code, WebStorm, and most editors
root = true

[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.md]
trim_trailing_whitespace = false

[Makefile]
indent_style = tab

[*.{yml,yaml}]
indent_size = 2

[*.json]
indent_size = 2

Setup Script

The following script installs everything on a fresh macOS machine. Run it once on a new machine or hand it to a new engineer on their first day:

#!/usr/bin/env bash
# setup.sh — Bootstrap a macOS development environment for the JS/TS stack
# Usage: bash setup.sh
# Safe to re-run — all steps are idempotent

set -e

echo "==> Checking for Homebrew..."
if ! command -v brew &>/dev/null; then
  echo "Installing Homebrew..."
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
fi

echo "==> Installing CLI tools via Homebrew..."
brew install fnm gh jq httpie

echo "==> Installing Docker Desktop..."
if ! command -v docker &>/dev/null; then
  brew install --cask docker
  echo "Docker Desktop installed. Launch it from Applications before continuing."
fi

echo "==> Configuring fnm in ~/.zshrc..."
ZSHRC="$HOME/.zshrc"
FNM_LINE='eval "$(fnm env --use-on-cd --shell zsh)"'
if ! grep -q "fnm env" "$ZSHRC" 2>/dev/null; then
  echo "" >> "$ZSHRC"
  echo "# fnm — Node version manager" >> "$ZSHRC"
  echo "$FNM_LINE" >> "$ZSHRC"
  echo "fnm configuration added to ~/.zshrc"
else
  echo "fnm already configured in ~/.zshrc"
fi

echo "==> Loading fnm in current shell..."
eval "$(fnm env --use-on-cd --shell bash)"

echo "==> Installing Node.js 22 LTS..."
fnm install 22
fnm default 22
fnm use 22
echo "Node version: $(node -v)"

echo "==> Enabling Corepack and installing pnpm..."
corepack enable
corepack prepare pnpm@latest --activate
echo "pnpm version: $(pnpm -v)"

echo "==> Installing Render CLI..."
npm install -g @render-oss/render-cli

echo "==> Installing VS Code extensions..."
EXTENSIONS=(
  "dbaeumer.vscode-eslint"
  "esbenp.prettier-vscode"
  "Vue.volar"
  "Prisma.prisma"
  "eamodio.gitlens"
  "usernamehzq.error-lens"
  "rangav.vscode-thunder-client"
  "ms-azuretools.vscode-docker"
  "mikestead.dotenv"
  "bradlc.vscode-tailwindcss"
)

if command -v code &>/dev/null; then
  for ext in "${EXTENSIONS[@]}"; do
    code --install-extension "$ext" --force
  done
  echo "VS Code extensions installed."
else
  echo "VS Code CLI (code) not found. Install VS Code and add it to PATH, then re-run."
fi

echo "==> Authenticating GitHub CLI..."
if ! gh auth status &>/dev/null; then
  gh auth login
else
  echo "GitHub CLI already authenticated."
fi

echo ""
echo "==> Setup complete. Next steps:"
echo "    1. Start Docker Desktop from Applications"
echo "    2. Run: render login"
echo "    3. Open a new terminal for fnm changes to take effect"
echo "    4. Clone your project: gh repo clone your-org/your-repo"
echo "    5. cd into the project and run: pnpm install"

First-Run Experience for a New Engineer

After running the setup script:

# Clone the repository
gh repo clone your-org/your-repo
cd your-repo

# fnm reads .nvmrc and switches Node automatically (--use-on-cd)
# Verify the correct version is active
node -v         # should match .nvmrc

# Install project dependencies
pnpm install

# Start the local database
docker compose up -d postgres redis

# Run migrations and seed
pnpm db:migrate
pnpm db:seed

# Start all services in development mode
pnpm dev

# In a separate terminal: run tests in watch mode
pnpm test --watch

The docker-compose.yml that supports local development:

# docker-compose.yml — local development services only
version: '3.9'

services:
  postgres:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: devpassword
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

The .env.example that engineers copy to .env.local:

# .env.example — copy to .env.local and fill in values
# Never commit .env.local

# Database (matches docker-compose.yml defaults)
DATABASE_URL=postgresql://myapp:devpassword@localhost:5432/myapp_dev

# Redis
REDIS_URL=redis://localhost:6379

# Auth (Clerk — get from https://dashboard.clerk.com)
CLERK_SECRET_KEY=sk_test_your_key_here
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_test_your_key_here

# Sentry (optional for local dev)
SENTRY_DSN=

# Stripe (sandbox keys for local dev)
STRIPE_SECRET_KEY=sk_test_your_key_here
NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_your_key_here

Troubleshooting Common Environment Issues

node: command not found after installing fnm
fnm changes to ~/.zshrc only take effect in new terminal windows. Open a new terminal and retry. If that fails, confirm the eval "$(fnm env...)" line is in ~/.zshrc and not ~/.bashrc (macOS Catalina+ uses zsh by default).

pnpm install fails with ENOENT: no such file or directory, open '.../pnpm-lock.yaml'
The lockfile is not in the repository. Run pnpm install from the repository root (the directory containing package.json), not from a subdirectory.

pnpm install installs a different lockfile version from CI
The packageManager field in package.json specifies the exact pnpm version. Run corepack use pnpm@X.Y.Z (matching the version in package.json) to align your local pnpm with CI.

prisma migrate dev fails with P1001: Can't reach database server
The PostgreSQL container is not running. Run docker compose up -d postgres and wait for the health check to pass (docker compose ps shows healthy).

VS Code shows TypeScript errors that the CLI does not
VS Code may be using its bundled TypeScript version instead of the project’s. Open the command palette (Cmd+Shift+P), run “TypeScript: Select TypeScript Version”, and choose “Use Workspace Version”. The workspace setting "typescript.tsdk": "node_modules/typescript/lib" in .vscode/settings.json sets this automatically, but it only activates if you accept the prompt.

ESLint shows no errors but pnpm lint finds issues
The VS Code ESLint extension and the CLI use the same configuration, but the extension may be disabled for certain file types. Check the ESLint output panel (View > Output > ESLint) for errors. If the extension is not running on .ts files, add "typescript" to the eslint.validate workspace setting.

gh auth fails behind a corporate proxy
Configure the GitHub CLI to use the proxy: export HTTPS_PROXY=http://proxy.company.com:8080. Add this to ~/.zshrc for persistence.

Port 5432 already in use
Another PostgreSQL instance is running (locally installed Postgres or another Docker container). Stop it with brew services stop postgresql@15 or docker ps | grep postgres followed by docker stop <container-id>.


Gotchas for .NET Engineers

1. Node version switches are per-shell, not global, unless you set a default. Running fnm use 22 activates Node 22 in the current terminal only. If you open a new terminal without --use-on-cd or without the eval "$(fnm env...)" line in your shell config, you get the default version. Set the default explicitly with fnm default 22 after installing. This is unlike the .NET SDK, where installing a version makes it globally available immediately.

2. npm install and pnpm install do not do the same thing with lock files. If someone on the team runs npm install in a pnpm repository, npm creates a package-lock.json alongside the existing pnpm-lock.yaml, and the two lockfiles diverge. CI uses pnpm install --frozen-lockfile, which reads pnpm-lock.yaml. The engineer’s npm install effectively becomes invisible to everyone else. Add .npmrc to the repo root to prevent this:

# .npmrc
engine-strict=true

And add to package.json:

{
  "scripts": {
    "preinstall": "npx only-allow pnpm"
  }
}

This causes npm install and yarn install to fail with a clear message pointing to pnpm.

3. pnpm install --frozen-lockfile fails when package.json is changed without running pnpm install locally. CI runs with --frozen-lockfile to prevent accidental lockfile drift. If an engineer adds a dependency to package.json and pushes without running pnpm install locally to update pnpm-lock.yaml, CI fails. The fix is to always run pnpm install after editing package.json and commit both files together.


Quick Reference

# Node version management (fnm)
fnm install 22              # install Node 22
fnm use 22                  # use in current shell
fnm default 22              # set as default for new shells
fnm list                    # list installed versions
fnm list-remote             # list available versions to install
cat .nvmrc                  # see which version this project expects

# pnpm
pnpm install                # install all dependencies
pnpm install --frozen-lockfile  # CI-safe install (no lockfile changes)
pnpm add <pkg>              # add a runtime dependency
pnpm add -D <pkg>           # add a dev dependency
pnpm remove <pkg>           # remove a dependency
pnpm update                 # update all packages within semver ranges
pnpm exec prisma migrate dev  # run prisma CLI via pnpm

# Docker (local services)
docker compose up -d        # start all services in background
docker compose down         # stop and remove containers
docker compose logs -f      # tail all service logs
docker compose ps           # show container status

# GitHub CLI
gh repo clone org/repo      # clone repository
gh pr list                  # list open PRs
gh pr create                # create a PR interactively
gh pr checkout 42           # check out PR #42 locally
gh issue list               # list open issues

# VS Code extensions (install from CLI)
code --install-extension <extension-id>
code --list-extensions      # list all installed extensions

Further Reading

6.10 — Performance Monitoring & APM

For .NET engineers who know: Application Insights SDK, Live Metrics Stream, distributed tracing with W3C TraceContext, and Azure Monitor alerts
You’ll learn: How Sentry covers both error tracking and performance monitoring for our stack, and where it maps to (and diverges from) Application Insights
Time: 10-15 min read


The .NET Way (What You Already Know)

Application Insights is Microsoft’s APM solution. The Microsoft.ApplicationInsights.AspNetCore NuGet package instruments an ASP.NET Core application automatically — requests, dependencies (SQL, HTTP, Redis), exceptions, and custom events flow to Azure Monitor with minimal configuration:

// Program.cs — Application Insights in ASP.NET Core
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"];
    options.EnableAdaptiveSampling = true;       // sample at high throughput
    options.EnableQuickPulseMetricStream = true; // Live Metrics Stream
});

Application Insights gives you: a transaction search (find a specific request by ID), an application map showing service dependencies, distributed traces spanning multiple services via W3C TraceContext headers, performance metrics per endpoint (P50/P75/P95 response times), and SQL query durations attached to the traces that generated them. Alerting lives in Azure Monitor.

The core data model: a request is the root trace. Attached to it are dependencies (outbound SQL, HTTP, Redis calls) and exceptions. Custom telemetry (TelemetryClient.TrackEvent, TrackMetric) enriches the trace. All of it rolls up to a single trace ID you can use to reconstruct the full request path across services.

Our stack replaces Application Insights with Sentry. The data model is similar; the implementation differs.


Sentry vs. Application Insights — Conceptual Map

Application Insights | Sentry Equivalent
Requests (server-side) | Transactions / Spans
Dependencies (SQL, HTTP, Redis) | Child spans within a transaction
Exceptions / TelemetryClient.TrackException | Issues (errors + stack traces)
Custom events (TrackEvent) | Custom spans or breadcrumbs
Custom metrics (TrackMetric) | Measurements attached to spans
Live Metrics Stream | Sentry’s real-time event stream
Application Map | Sentry’s Trace Explorer (partial equivalent)
Sampling rate | tracesSampleRate
Smart detection / anomaly alerts | Sentry Alerts (metric-based)
Azure Monitor dashboards | Sentry Performance dashboard
Distributed tracing (W3C) | Sentry distributed tracing (W3C-compatible)
Connection string | DSN (Data Source Name)

Where Application Insights is deeply integrated with Azure (RBAC, Log Analytics, KQL queries), Sentry is cloud-agnostic and works the same on Render, Vercel, or self-hosted infrastructure.


Setting Up Sentry Performance Tracing

NestJS (Backend)

pnpm add @sentry/node @sentry/profiling-node

Initialize Sentry at the very top of main.ts, before any other imports:

// main.ts — Sentry must be initialized before other imports
import * as Sentry from '@sentry/node';
import { nodeProfilingIntegration } from '@sentry/profiling-node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV ?? 'development',
  release: process.env.SENTRY_RELEASE ?? 'local',

  // Performance tracing
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
  // 10% of requests traced in production; 100% in dev/staging

  // Profiling (CPU profiling attached to traces)
  profilesSampleRate: 0.1,  // profile 10% of sampled transactions

  integrations: [
    nodeProfilingIntegration(),
    // Automatically instruments HTTP, PostgreSQL, Redis, etc.
    Sentry.prismaIntegration(),
  ],

  // Filter out health check endpoints from tracing
  tracesSampler: (samplingContext) => {
    const url = samplingContext.request?.url ?? '';
    if (url.includes('/health') || url.includes('/metrics')) {
      return 0; // never trace health checks
    }
    return process.env.NODE_ENV === 'production' ? 0.1 : 1.0;
  },
});

// Now import and initialize NestJS
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Sentry request handler — must be before routes
  app.use(Sentry.Handlers.requestHandler());
  app.use(Sentry.Handlers.tracingHandler());

  await app.listen(process.env.PORT ?? 3000);
}

bootstrap();

Wire the Sentry exception handler in the NestJS exception filter to capture unhandled exceptions with full context:

// sentry-exception.filter.ts
import {
  ArgumentsHost,
  Catch,
  ExceptionFilter,
  HttpException,
  HttpStatus,
} from '@nestjs/common';
import * as Sentry from '@sentry/node';
import { Request, Response } from 'express';

@Catch()
export class SentryExceptionFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const req = ctx.getRequest<Request>();
    const res = ctx.getResponse<Response>();

    const status =
      exception instanceof HttpException
        ? exception.getStatus()
        : HttpStatus.INTERNAL_SERVER_ERROR;

    // Only send 5xx errors to Sentry — 4xx are expected client errors
    if (status >= 500) {
      Sentry.withScope((scope) => {
        scope.setTag('endpoint', `${req.method} ${req.route?.path ?? req.path}`);
        scope.setUser({ id: (req as any).user?.id });
        scope.setContext('request', {
          method: req.method,
          url: req.url,
          params: req.params,
          query: req.query,
          // never log request body — may contain credentials
        });
        Sentry.captureException(exception);
      });
    }

    res.status(status).json({
      statusCode: status,
      message: exception instanceof HttpException
        ? exception.message
        : 'Internal server error',
    });
  }
}

Register the filter globally:

// main.ts (continued)
import { SentryExceptionFilter } from './filters/sentry-exception.filter';

app.useGlobalFilters(new SentryExceptionFilter());

Custom Performance Spans

To instrument a specific business operation (equivalent to TelemetryClient.StartOperation in Application Insights):

// product.service.ts — custom span for a slow operation
import { Injectable } from '@nestjs/common';
import * as Sentry from '@sentry/node';

@Injectable()
export class ProductService {
  async importProducts(filePath: string): Promise<number> {
    return Sentry.startSpan(
      {
        name: 'product.import',
        op: 'task',
        attributes: {
          'file.path': filePath,
          'import.type': 'csv',
        },
      },
      async (span) => {
        const rows = await this.parseCsv(filePath);
        span.setAttribute('import.row_count', rows.length);

        const inserted = await this.bulkInsert(rows);
        span.setAttribute('import.inserted_count', inserted);

        return inserted;
      }
    );
  }
}

The span appears nested under the parent HTTP request trace in Sentry’s Trace Explorer.

Prisma Query Performance

The Sentry.prismaIntegration() initialization (included above) automatically captures all Prisma queries as spans attached to the active transaction. No additional configuration needed. Each span shows the query duration, the SQL model (not the full query by default — to avoid logging sensitive data), and whether it was a slow query.

To see which queries are slow: Sentry Performance dashboard > select any transaction > expand the span waterfall > look for Prisma spans with duration > 100ms.
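Locally, before anything reaches Sentry, you can get the same signal from Prisma’s query event log. A sketch (the 100ms threshold is an arbitrary choice, and the file name is illustrative):

// prisma-client.ts — hypothetical local helper that warns about slow queries
import { PrismaClient } from '@prisma/client';

export const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }],
});

prisma.$on('query', (e) => {
  // e.query is the SQL statement shape, e.duration is in milliseconds
  if (e.duration > 100) {
    console.warn(`[slow query] ${e.duration}ms ${e.query}`);
  }
});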


Next.js (Frontend)

pnpm add @sentry/nextjs

Run the wizard — it generates the configuration files:

npx @sentry/wizard@latest -i nextjs

The wizard creates sentry.client.config.ts, sentry.server.config.ts, and sentry.edge.config.ts. Edit the client config to add performance tracing:

// sentry.client.config.ts
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  environment: process.env.NODE_ENV ?? 'development',
  release: process.env.SENTRY_RELEASE,

  // Performance tracing
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,

  // Core Web Vitals
  // Sentry automatically captures CWV with the BrowserTracing integration
  integrations: [
    Sentry.browserTracingIntegration({
      // Trace outbound requests to your API
      tracePropagationTargets: [
        'localhost',
        /^https:\/\/api\.example\.com/,
      ],
    }),
    Sentry.replayIntegration({
      // Session replay — 10% of sessions, 100% of sessions with errors
      maskAllText: true,         // GDPR: mask text by default
      blockAllMedia: true,       // block images in replay
    }),
  ],

  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 1.0,
});

The tracePropagationTargets list determines which outbound requests get Sentry trace headers attached — enabling distributed tracing between the Next.js frontend and NestJS backend. This is the equivalent of Application Insights’ automatic correlation via Request-Id and traceparent headers.
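From the browser side, that might look like the following sketch; the span name and endpoint are placeholders, and with tracePropagationTargets matching the API host, Sentry attaches the sentry-trace and baggage headers to the fetch automatically:

// Sketch of a client-side action that participates in a distributed trace
import * as Sentry from '@sentry/nextjs';

export async function submitOrder(payload: unknown) {
  return Sentry.startSpan({ name: 'checkout.submit', op: 'ui.action' }, async () => {
    // This fetch carries the trace headers, linking the browser span
    // to the NestJS transaction it triggers on the backend.
    const res = await fetch('https://api.example.com/api/orders', {
      method: 'POST',
      credentials: 'include',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    return res.json();
  });
}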


Interpreting the Sentry Performance Dashboard

After a few hours of production traffic, navigate to your Sentry project > Performance.

Transactions view: Sentry groups requests by route pattern (e.g., GET /api/products/:id, POST /api/orders). For each transaction you see:

  • P50 / P75 / P95 / P99 response times (equivalent to Application Insights’ percentile charts)
  • Throughput (requests per minute)
  • Failure rate (5xx responses as percentage)

Sort by P95 descending to find your slowest endpoints. A P95 of 2000ms means 1 in 20 requests takes over 2 seconds — that is where to start.

Trace Explorer (transaction detail): Click any transaction to see the span waterfall. This is the equivalent of Application Insights’ end-to-end transaction detail:

GET /api/orders/:id                             342ms
├── middleware: auth                              12ms
├── OrdersService.findOne                        285ms
│   ├── prisma:query SELECT orders WHERE id=...  240ms  ← slow
│   └── prisma:query SELECT items WHERE order... 30ms
└── serialize response                            8ms

The 240ms Prisma query is the bottleneck. Click it to see the model name, operation, and (if you enabled sendDefaultPii) the full query.

Issues vs. Performance: Sentry has two main sections that you will use daily:

  • Issues: Error tracking — grouped by stack trace fingerprint, equivalent to Application Insights’ Failures blade
  • Performance: APM — transactions, spans, Core Web Vitals, equivalent to Application Insights’ Performance blade

Core Web Vitals Monitoring

Sentry’s browserTracingIntegration automatically captures Core Web Vitals from real user sessions:

Metric | What It Measures | Good Threshold
LCP (Largest Contentful Paint) | Load performance — when the largest visible element renders | < 2.5s
INP (Interaction to Next Paint) | Responsiveness — time from user interaction to visual update | < 200ms
CLS (Cumulative Layout Shift) | Visual stability — how much the layout shifts during load | < 0.1
FCP (First Contentful Paint) | Time to first visible content | < 1.8s
TTFB (Time to First Byte) | Server response time | < 800ms

In Sentry: Performance > Web Vitals tab. This shows the distribution of each metric across real user sessions segmented by page route. Poor LCP on /products but not /dashboard tells you the product page has a specific performance problem (large images, slow data fetch, render-blocking scripts).
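If you want to watch the same metrics in the browser console while developing, independent of Sentry, the web-vitals package (an assumption here, not part of our standard setup) reports them directly:

// report-web-vitals.ts — hypothetical dev-only helper using the web-vitals package
import { onCLS, onINP, onLCP } from 'web-vitals';

// Each callback fires when its metric is finalized for the page.
const log = (metric: { name: string; value: number }) =>
  console.log(`[web-vitals] ${metric.name}:`, metric.value);

onCLS(log);
onINP(log);
onLCP(log);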


Performance Budgets and Alerting

Setting Up Sentry Alerts

In Sentry: Alerts > Create Alert > Performance.

Useful alert configurations for a production API:

Alert: P95 response time > 2000ms
  Transaction: GET /api/*
  Environment: production
  Trigger: when P95 > 2,000ms for 5 minutes
  Action: notify Slack #alerts channel

Alert: Error rate > 5%
  Environment: production
  Trigger: when error rate > 5% for 2 minutes
  Action: notify Slack #alerts channel + page on-call

Alert: Apdex score < 0.8
  (Apdex measures user satisfaction; 1.0 = all requests within threshold)
  Trigger: when Apdex < 0.8 for 10 minutes
  Action: notify Slack #alerts

Performance Budgets

Define acceptable thresholds and fail CI when they are violated using the Sentry CLI and performance budget assertions. A simpler approach is Lighthouse CI for frontend budgets:

# Install Lighthouse CI
pnpm add -D @lhci/cli

// lighthouserc.json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000", "http://localhost:3000/products"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["warn", { "minScore": 0.8 }],
        "first-contentful-paint": ["warn", { "maxNumericValue": 2000 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 4000 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 600 }]
      }
    }
  }
}

# .github/workflows/ci.yml (excerpt)
- name: Build Next.js
  run: pnpm build

- name: Start Next.js
  run: pnpm start &
  env:
    PORT: 3000

- name: Run Lighthouse CI
  run: pnpm lhci autorun

Identifying Slow Endpoints and DB Queries

A practical workflow for finding and fixing a performance problem:

Step 1: Open Sentry Performance, sort transactions by P95 descending. Find the endpoint with the worst P95. Check whether it is consistently slow (all-day problem) or spiky (load-related or a specific query plan regression).

Step 2: Click through to the transaction detail. Examine the span waterfall. Look for:

  • Prisma spans with duration > 100ms
  • N+1 patterns: repeated identical queries (many short spans of the same Prisma query)
  • Missing spans: time unaccounted for between spans (usually CPU-bound work)

Step 3: Identify the query from the Prisma span. The span name shows the model and operation (prisma:query products findMany). In your application code, find the prisma.product.findMany() call. Check whether it is missing an index, returning more columns than needed, or triggering N+1 fetches.

N+1 example — the most common Prisma performance problem:

// BAD — N+1: one query for orders, N queries for each order's user
const orders = await prisma.order.findMany();
const ordersWithUser = await Promise.all(
  orders.map(async (order) => ({
    ...order,
    user: await prisma.user.findUnique({ where: { id: order.userId } }),
  }))
);

// GOOD — single query with join
const orders = await prisma.order.findMany({
  include: { user: true },  // Prisma generates a JOIN
});

Step 4: Add a database index if missing. A missing index on a foreign key or filter column is the single most common cause of slow queries:

// schema.prisma — add index for common filter patterns
model Order {
  id        String   @id @default(cuid())
  userId    String
  status    String
  createdAt DateTime @default(now())

  user User @relation(fields: [userId], references: [id])

  @@index([userId])              // filter orders by user
  @@index([status, createdAt])   // filter by status, sort by date
}

After adding the index and running prisma migrate dev, deploy and monitor Sentry. The Prisma span duration for that endpoint should drop.


Key Differences from Application Insights

Application Insights | Sentry
Tightly coupled to Azure (RBAC, Log Analytics, Monitor) | Cloud-agnostic
KQL for querying telemetry | UI-driven search + Sentry Query Language (SQL-like)
Smart detection of anomalies (built-in ML) | Alert rules based on thresholds
Application Map (visual service graph) | Trace Explorer (per-transaction waterfall)
Live Metrics Stream (real-time telemetry) | Real-time event stream in Issues view
Source maps uploaded via CLI or CI plugin | Source maps uploaded via Sentry webpack/turbo plugin
Session analytics from App Insights JS SDK | Session Replay (video-like replay of user sessions)
Adaptive sampling via AI | Configurable tracesSampleRate + tracesSampler function
Performance blades in Azure Portal | Unified Performance tab in Sentry

The practical difference for daily use: Application Insights requires familiarity with KQL to answer non-obvious questions. Sentry’s UI surfaces the most important information (slowest endpoints, worst error rates, failing releases) without writing queries. For ad-hoc investigation you will occasionally miss KQL, but for day-to-day monitoring Sentry’s defaults are faster.


Gotchas for .NET Engineers

1. Sentry.init() must be the very first code that runs — imports included. In Node.js, import statements are hoisted and executed before any runtime code in the file. If you import NestJS or Prisma before calling Sentry.init(), Sentry cannot instrument those modules. The @sentry/node package uses module patching — it wraps the exported functions of pg, ioredis, and other modules to inject span creation. If Prisma is already imported when Sentry initializes, the patching has no effect and you see no database spans. Structure main.ts so that Sentry.init() precedes all other imports, or use a separate instrument.ts entry point:

// instrument.ts — initialize Sentry, nothing else
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0.1,
});
// package.json — load instrument.ts before the app
{
  "scripts": {
    "start": "node --require ./dist/instrument.js dist/main.js"
  }
}

2. tracesSampleRate: 1.0 in production will saturate your Sentry quota and degrade performance. Application Insights’ adaptive sampling adjusts automatically. Sentry’s tracesSampleRate is a fixed fraction. At 1.0, every request is traced. For a service handling 1000 req/min, that is 60,000 traces per hour. Most Sentry plans have monthly event limits — you can exhaust the quota in hours and lose all tracing for the rest of the billing period. Start with 0.1 (10%) in production and adjust based on your volume and Sentry plan limits.

3. Distributed tracing between Next.js and NestJS requires CORS to allow Sentry trace headers. Sentry injects sentry-trace and baggage headers into outbound fetch calls from the browser. Your NestJS CORS configuration must allowlist these headers:

app.enableCors({
  allowedHeaders: ['Content-Type', 'Authorization', 'sentry-trace', 'baggage'],
  // ... other options
});

Without this, the browser’s preflight OPTIONS request succeeds but the actual request fails CORS validation because sentry-trace is not in the allowlist. The result: you see frontend transactions in Sentry but they do not link to backend transactions — the distributed trace chain is broken.
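
On the browser side, Sentry only attaches sentry-trace and baggage to outgoing requests whose URLs match tracePropagationTargets, so the frontend init needs to include your API origin as well (a minimal sketch; the API URL below is a placeholder):

// sentry.client.config.ts — attach trace headers to cross-origin API calls
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1,
  // Only requests whose URL matches one of these targets get sentry-trace / baggage headers
  tracePropagationTargets: ['localhost', /^https:\/\/api\.yourapp\.com/],
});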

4. Source maps must be uploaded to Sentry or stack traces in production will be minified and unreadable. Application Insights stack traces are readable because .NET PDBs are deployed with the application. JavaScript production builds are minified — product.service.ts:42 becomes t.js:1:18432. Without uploaded source maps, Sentry error stack traces are useless for debugging. Upload source maps as part of your CI/CD pipeline:

# Install Sentry CLI
pnpm add -D @sentry/cli

# Upload source maps after build
sentry-cli releases files "$SENTRY_RELEASE" upload-sourcemaps ./dist \
  --url-prefix '~/' \
  --rewrite

Or use the @sentry/webpack-plugin / Next.js plugin which handles this automatically.

5. Sentry’s Session Replay integration can capture sensitive user data if not configured carefully. The Session Replay feature records real user interactions; if masking is turned off, it captures all text content and form inputs. Enable maskAllText: true and blockAllMedia: true in replayIntegration to comply with GDPR and prevent credential leakage in replays. Add data-sentry-mask attributes to specific elements that contain PII, or data-sentry-unmask to safe elements you want to keep readable.
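
A minimal client-side sketch of a privacy-safe Replay setup (maskAllText and blockAllMedia are the Sentry option names; the sample rates are illustrative):

// sentry.client.config.ts — Session Replay with masking enabled
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  replaysSessionSampleRate: 0.1,   // record 10% of normal sessions
  replaysOnErrorSampleRate: 1.0,   // record every session that hits an error
  integrations: [
    Sentry.replayIntegration({
      maskAllText: true,     // replace visible text with asterisks in the replay
      blockAllMedia: true,   // do not record images, video, or canvas content
    }),
  ],
});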


Hands-On Exercise

Instrument a NestJS + Next.js application with Sentry end-to-end performance tracing.

Step 1: Create a Sentry project (or use an existing one) and obtain the DSN. Add SENTRY_DSN to your .env.local.

Step 2: Install @sentry/node in the NestJS app. Add the Sentry.init() call as the very first code in main.ts. Set tracesSampleRate: 1.0 for local development.

Step 3: Add Sentry.prismaIntegration() to the integrations array. Make a few API calls through the NestJS API (use Thunder Client or httpie).

Step 4: Open Sentry > Performance > Transactions. Confirm your NestJS transactions appear. Click one transaction and examine the span waterfall. Confirm Prisma spans are visible with query durations.

Step 5: Add a SentryExceptionFilter to NestJS. Trigger an intentional error (add a route that throws new Error('test error')). Confirm the error appears in Sentry Issues with a readable stack trace.

Step 6: Install @sentry/nextjs in the Next.js app and run the wizard. Add sentry-trace and baggage to the NestJS CORS allowedHeaders. Make a fetch call from the Next.js frontend to the NestJS backend. In Sentry, find the frontend transaction and verify it links to the corresponding backend transaction (the distributed trace).

Step 7: Set up one Sentry alert: P95 > 1000ms for any transaction in the development environment. Trigger it by adding await new Promise(r => setTimeout(r, 1100)) to a route and calling it. Verify the alert fires within the configured window.


Quick Reference

# Sentry CLI
sentry-cli login
sentry-cli releases new "$VERSION"
sentry-cli releases files "$VERSION" upload-sourcemaps ./dist
sentry-cli releases finalize "$VERSION"
sentry-cli releases deploys "$VERSION" new -e production

# Test that Sentry is receiving events
npx @sentry/wizard@latest -i nextjs    # Next.js setup wizard
// Minimal NestJS Sentry setup (main.ts)
import * as Sentry from '@sentry/node';
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
  integrations: [Sentry.prismaIntegration()],
});

// Custom span
const result = await Sentry.startSpan(
  { name: 'my-operation', op: 'task' },
  async () => {
    return await doWork();
  }
);

// Capture error manually
Sentry.captureException(error);

// Add context to current scope
Sentry.setUser({ id: userId, email: userEmail });
Sentry.setTag('feature_flag', 'experiment-a');
Sentry.addBreadcrumb({
  message: 'User clicked checkout',
  category: 'ui',
  level: 'info',
});
// CORS allowedHeaders must include Sentry trace headers
app.enableCors({
  allowedHeaders: [
    'Content-Type',
    'Authorization',
    'sentry-trace',  // required for distributed tracing
    'baggage',       // required for distributed tracing
  ],
  credentials: true,
});

Further Reading

Sentry: Error Tracking and Monitoring

For .NET engineers who know: Application Insights — SDK telemetry, structured logging, dependency tracking, Live Metrics, and the Azure portal investigation workflow
You’ll learn: How Sentry covers the error-tracking half of Application Insights with a sharper focus on developer experience — source maps, release tracking, and issue grouping — and how to configure it across our NestJS, Next.js, and Vue layers
Time: 15-20 min read

The .NET Way (What You Already Know)

Application Insights is Microsoft’s unified observability platform. You call builder.Services.AddApplicationInsightsTelemetry(), set APPLICATIONINSIGHTS_CONNECTION_STRING, and it automatically captures exceptions, request telemetry, dependency calls (SQL, HTTP, queues), and custom events. The SDK hooks into the CLR at a deep level — it patches HttpClient, SqlClient, and the ASP.NET Core middleware pipeline without you touching exception handling code.

When something breaks in production, your workflow is: Azure portal, Application Insights, Failures blade, find the exception, read the stack trace, correlate with the operation ID to see what happened before and after. Alerts are configured in the portal.

// Program.cs
builder.Services.AddApplicationInsightsTelemetry();

// Custom telemetry anywhere in the app
public class OrderService
{
    private readonly TelemetryClient _telemetry;

    public OrderService(TelemetryClient telemetry)
    {
        _telemetry = telemetry;
    }

    public async Task<Order> PlaceOrderAsync(CreateOrderDto dto)
    {
        using var operation = _telemetry.StartOperation<RequestTelemetry>("PlaceOrder");
        try
        {
            var order = await _repository.CreateAsync(dto);
            _telemetry.TrackEvent("OrderPlaced", new Dictionary<string, string>
            {
                ["orderId"] = order.Id.ToString(),
                ["customerId"] = dto.CustomerId.ToString(),
            });
            return order;
        }
        catch (Exception ex)
        {
            _telemetry.TrackException(ex);
            throw;
        }
    }
}

Application Insights is metrics-first. It wants to show you dashboards, availability tests, and performance trends. If you just want to know “what broke, where, and why,” you end up navigating three blades to get to a readable stack trace.

The Sentry Way

Sentry is error-first. Its primary mental model is: an error happened, here is the complete context to understand and fix it — stack trace, user, browser/OS, request data, and a breadcrumb trail of what happened in the seconds before the error. Metrics and performance monitoring exist, but the product is built around the exception.

The other meaningful difference: Sentry is language-agnostic and installed as an npm package, not as a platform SDK. You configure it once per layer (frontend, backend) and the same Sentry project dashboard shows errors from all layers.

Installing the SDK

Our stack uses three layers, each with its own SDK package:

# NestJS backend
pnpm add @sentry/node @sentry/profiling-node

# Next.js frontend (use the Next.js-specific package)
pnpm add @sentry/nextjs

# Vue 3 frontend
pnpm add @sentry/vue

NestJS Integration

Sentry must be initialized before any other import in your application. This is the one setup rule that causes the most issues (covered in Gotchas).

// src/instrument.ts — MUST be imported before everything else
import * as Sentry from '@sentry/node';
import { nodeProfilingIntegration } from '@sentry/profiling-node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,           // 'development' | 'staging' | 'production'
  release: process.env.SENTRY_RELEASE,         // e.g. 'api@1.4.2' — set by CI
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
  profilesSampleRate: 1.0,                      // Profile 100% of sampled transactions
  integrations: [
    nodeProfilingIntegration(),
  ],
  // Filter out noise before it reaches Sentry
  beforeSend(event, hint) {
    const error = hint.originalException;
    // Don't send 404s and validation errors — those are expected
    if (error instanceof Error && error.name === 'NotFoundException') {
      return null;
    }
    return event;
  },
});
// src/main.ts — import instrument.ts FIRST, before NestFactory
import './instrument';   // ← This line must come before any other import
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import * as Sentry from '@sentry/node';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.setGlobalPrefix('api');

  // Sentry's request handler middleware — captures request context with each error
  app.use(Sentry.Handlers.requestHandler());

  // Must come BEFORE error handling middleware
  app.use(Sentry.Handlers.tracingHandler());

  await app.listen(3000);
}
bootstrap();

For NestJS, the cleanest way to capture unhandled exceptions and attach them to Sentry is a global exception filter:

// src/common/filters/sentry-exception.filter.ts
import {
  ExceptionFilter,
  Catch,
  ArgumentsHost,
  HttpException,
  HttpStatus,
} from '@nestjs/common';
import * as Sentry from '@sentry/node';
import { Request, Response } from 'express';

@Catch()
export class SentryExceptionFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();
    const request = ctx.getRequest<Request>();

    const status =
      exception instanceof HttpException
        ? exception.getStatus()
        : HttpStatus.INTERNAL_SERVER_ERROR;

    // Only send 500-level errors to Sentry — 4xx are expected application behavior
    if (status >= 500) {
      Sentry.captureException(exception, {
        extra: {
          url: request.url,
          method: request.method,
          body: request.body,
        },
      });
    }

    response.status(status).json({
      statusCode: status,
      timestamp: new Date().toISOString(),
      path: request.url,
    });
  }
}
// src/main.ts — register the filter globally
import { SentryExceptionFilter } from './common/filters/sentry-exception.filter';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalFilters(new SentryExceptionFilter());
  // ...
}

Next.js Integration

The @sentry/nextjs package ships a wizard that does most of the configuration for you, but it’s worth understanding what it produces:

# Run the wizard — it modifies next.config.js and creates sentry.*.config.ts files
npx @sentry/wizard@latest -i nextjs

The wizard creates:

  • sentry.client.config.ts — client-side initialization (browser)
  • sentry.server.config.ts — server-side initialization (Node.js runtime)
  • sentry.edge.config.ts — Edge runtime initialization (middleware, edge routes)
// sentry.client.config.ts
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.NEXT_PUBLIC_SENTRY_RELEASE,

  // Performance: capture 10% of transactions in production
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,

  // Session replay: capture 10% of sessions, 100% of sessions with errors
  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 1.0,

  integrations: [
    Sentry.replayIntegration(),
  ],

  // Don't send errors caused by browser extensions or network issues
  ignoreErrors: [
    'ResizeObserver loop limit exceeded',
    'ChunkLoadError',           // Next.js chunk loading on route change
    /^Network request failed/,
  ],
});
// sentry.server.config.ts
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.SENTRY_RELEASE,
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
});

Vue 3 Integration

// src/main.ts
import { createApp } from 'vue';
import * as Sentry from '@sentry/vue';
import App from './App.vue';
import router from './router';

const app = createApp(App);

Sentry.init({
  app,
  dsn: import.meta.env.VITE_SENTRY_DSN,
  environment: import.meta.env.MODE,
  release: import.meta.env.VITE_SENTRY_RELEASE,
  integrations: [
    Sentry.browserTracingIntegration({ router }),  // Tracks route changes as transactions
    Sentry.replayIntegration(),
  ],
  tracesSampleRate: import.meta.env.PROD ? 0.1 : 1.0,
  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 1.0,
  tracePropagationTargets: [
    'localhost',
    /^https:\/\/api\.yourapp\.com/,  // Attach trace headers to API calls
  ],
});

app.use(router);
app.mount('#app');

Source Maps: Making Stack Traces Readable

Without source maps, your production stack traces look like this:

TypeError: Cannot read property 'id' of undefined
    at e (main.a3f8c9.js:1:2847)
    at t (main.a3f8c9.js:1:14203)

With source maps uploaded to Sentry, the same error looks like:

TypeError: Cannot read property 'id' of undefined
    at OrderService.findOne (src/orders/orders.service.ts:34:18)
    at OrdersController.getOrder (src/orders/orders.controller.ts:22:30)

Configure source map upload as part of your build:

# Install the Sentry CLI
pnpm add -D @sentry/cli
// vite.config.ts — for Vue/Vite projects
import { defineConfig } from 'vite';
import { sentryVitePlugin } from '@sentry/vite-plugin';

export default defineConfig({
  build: {
    sourcemap: true,  // Generate source maps
  },
  plugins: [
    sentryVitePlugin({
      org: 'your-org-slug',
      project: 'your-project-slug',
      authToken: process.env.SENTRY_AUTH_TOKEN,
      release: {
        name: process.env.SENTRY_RELEASE,
      },
      // Delete source maps from the server after upload
      // (they're only needed by Sentry, not end users)
      sourcemaps: {
        filesToDeleteAfterUpload: ['dist/**/*.map'],
      },
    }),
  ],
});
// next.config.js — @sentry/nextjs handles this automatically via the wizard
// but the relevant section looks like:
const { withSentryConfig } = require('@sentry/nextjs');

module.exports = withSentryConfig(nextConfig, {
  org: 'your-org-slug',
  project: 'your-project-slug',
  authToken: process.env.SENTRY_AUTH_TOKEN,
  silent: true,  // Suppress verbose output during build
  widenClientFileUpload: true,
});

User Context and Breadcrumbs

Setting user context is the equivalent of Application Insights’ TelemetryContext.User. In Sentry, you set it after authentication:

// NestJS — set user context in an auth guard or middleware
import * as Sentry from '@sentry/node';

// In your auth middleware after validating the JWT:
Sentry.setUser({
  id: user.id,
  email: user.email,
  username: user.username,
});

// Clear on logout
Sentry.setUser(null);
// Vue — set after Clerk/auth resolves the user
import { watch } from 'vue';
import * as Sentry from '@sentry/vue';
import { useUser } from '@clerk/vue';

// In a composable or app-level setup:
const { user } = useUser();
watch(user, (currentUser) => {
  if (currentUser) {
    Sentry.setUser({
      id: currentUser.id,
      email: currentUser.emailAddresses[0]?.emailAddress,
    });
  } else {
    Sentry.setUser(null);
  }
});

Breadcrumbs are the trail of events leading up to an error. Sentry collects them automatically (console logs, network requests, UI interactions in the browser), but you can add custom ones:

import * as Sentry from '@sentry/node';

// Custom breadcrumb — shows up in the event timeline
Sentry.addBreadcrumb({
  category: 'order',
  message: `Payment intent created for order ${orderId}`,
  level: 'info',
  data: {
    orderId,
    amount: amountCents,
  },
});

Manual Error Capture

For expected errors you want to track without letting them bubble up as unhandled exceptions:

import { InternalServerErrorException } from '@nestjs/common';
import * as Sentry from '@sentry/node';

try {
  await externalPaymentService.charge(dto);
} catch (error) {
  // Capture with additional context
  Sentry.captureException(error, {
    tags: {
      feature: 'payments',
      gateway: 'stripe',
    },
    extra: {
      orderId: dto.orderId,
      amount: dto.amountCents,
    },
    level: 'error',
  });

  // Then rethrow as a friendly 500 — or omit the throw entirely if you want to swallow the error
  throw new InternalServerErrorException('Payment processing failed');
}

// Track non-exception events
Sentry.captureMessage('Payment gateway timeout — falling back to retry queue', 'warning');

Release Tracking and Environments

Release tracking is what turns “an error occurred” into “this error was introduced in release v1.4.2 and affects 12 users.” Set the release in your CI pipeline:

# .github/workflows/deploy.yml (GitHub Actions)
- name: Build and deploy
  env:
    SENTRY_RELEASE: ${{ github.repository }}@${{ github.sha }}
    SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
  run: |
    pnpm build
    # Source maps are uploaded automatically by the Sentry build plugin

- name: Create Sentry release
  uses: getsentry/action-release@v1
  env:
    SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
    SENTRY_ORG: your-org-slug
    SENTRY_PROJECT: your-project-slug
  with:
    environment: production
    version: ${{ github.repository }}@${{ github.sha }}

Performance Monitoring

Sentry’s performance monitoring works similarly to Application Insights distributed tracing. Each incoming request creates a transaction, and spans within it capture the time taken by database calls, HTTP requests, etc.

// Manual span for custom operations
import * as Sentry from '@sentry/node';

async function processLargeExport(jobId: string) {
  return Sentry.startSpan(
    {
      name: 'processLargeExport',
      op: 'job',
      attributes: { jobId },
    },
    async (span) => {
      const data = await fetchData();        // Automatically traced if using Prisma/fetch
      span.setAttribute('rowCount', data.length);

      const result = await transformData(data);
      return result;
    },
  );
}

Alerts and Issue Triage

Configure alerts in the Sentry UI under Project Settings → Alerts. Common patterns:

  • Alert when a new issue is first seen (usually the first sign that a newly deployed release introduced a bug)
  • Alert when error rate exceeds N per minute
  • Alert when a resolved issue regresses (re-appears after being marked fixed)

Slack integration is configured per-organization under Settings → Integrations → Slack. You can route different projects or alert rules to different Slack channels.

Issue grouping uses fingerprinting — Sentry groups errors by stack trace signature. If grouping is wrong (two distinct bugs merged, or one bug split across multiple issues), you can customize fingerprints:

Sentry.captureException(error, {
  fingerprint: ['payment-gateway-timeout', dto.gateway],
  // Groups all payment timeout errors by gateway, instead of by stack trace
});

Key Differences

Concern | Application Insights | Sentry | Notes
Primary focus | Metrics + traces + errors (unified) | Errors first, performance secondary | App Insights is better for dashboards; Sentry is better for debugging
Setup | SDK + connection string, mostly automatic | Per-layer SDK init, explicit | More setup, more control
Stack traces | Readable with symbol server or PDBs | Readable with uploaded source maps | Source maps must be uploaded at build time
Error grouping | Basic — same exception type | Smart fingerprinting by stack trace signature | Sentry’s grouping is significantly better
User context | TelemetryContext.User | Sentry.setUser() | Identical concept
Breadcrumbs | Custom events via TrackEvent | Automatic + addBreadcrumb() | Sentry auto-captures console, network, clicks
Performance | Distributed tracing, dependency maps | Transactions + spans | App Insights is richer for infra; Sentry is simpler to use
Alerting | Azure Monitor alert rules | Per-project rules in Sentry UI | Both support Slack/email
Cost model | Per GB of data ingested | Per event, with generous free tier | Sentry’s free tier is usable for side projects
Self-hostable | No (Azure only) | Yes (open source) | Relevant if you have data residency requirements

Gotchas for .NET Engineers

Gotcha 1: Import Order Breaks Instrumentation

This is the most common setup mistake. Sentry’s Node.js SDK patches module internals at init() time using Node’s module system. If you import express, pg, axios, or any other module before calling Sentry.init(), Sentry cannot instrument them. Your stack traces will be incomplete and database spans will be missing.

// WRONG — NestFactory import loads express and http before Sentry can patch them
import { NestFactory } from '@nestjs/core';
import * as Sentry from '@sentry/node';

Sentry.init({ dsn: process.env.SENTRY_DSN });  // Too late — express already loaded
// CORRECT — instrument.ts is imported first, before any framework imports
// src/instrument.ts
import * as Sentry from '@sentry/node';
Sentry.init({ dsn: process.env.SENTRY_DSN });

// src/main.ts
import './instrument';   // ← First line in the file
import { NestFactory } from '@nestjs/core';

There is no equivalent gotcha in .NET because Application Insights hooks into the CLR, not module loading.

Gotcha 2: Source Maps Not Uploaded = Unreadable Production Errors

In development, stack traces are readable because you’re running unminified code. In production, your bundler minifies and renames everything. If you have not configured source map upload in your build pipeline, every production error arrives with obfuscated stack traces referencing e, t, and n instead of your actual function names.

Source maps must be uploaded to Sentry at build time, and they must be deleted from your deployment artifacts afterward. Shipping source maps publicly exposes your source code. The Sentry Vite and webpack plugins handle both upload and deletion automatically — use them.

Verify source maps are working: deliberately throw an error in a component, check Sentry, confirm the stack trace shows your file names and line numbers. Do this during your initial setup, not after the first production incident.
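
One way to do that is a throwaway page with a button that throws on click (a sketch for a Next.js app-router project; the path and names are illustrative, and the page should be removed once verification is done):

// app/debug/page.tsx — temporary page for verifying Sentry + source maps
'use client';

export default function DebugPage() {
  return (
    <button
      onClick={() => {
        // If source maps are uploaded, the Sentry issue should show this file and line number
        throw new Error('Sentry source map verification error');
      }}
    >
      Throw test error
    </button>
  );
}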

Gotcha 3: Sending 4xx Errors as Exceptions

Application Insights records everything. Sentry bills per event and its value is in signal-to-noise ratio. If you let every NotFoundException (404) and UnauthorizedException (401) flow to Sentry, you end up with thousands of meaningless events that bury real errors.

Filter these out in beforeSend (Sentry-wide) or in your exception filter (NestJS-specific):

// In your SentryExceptionFilter — only capture 500s
if (status >= 500) {
  Sentry.captureException(exception);
}

// Or in Sentry.init() — filter before the event is sent
beforeSend(event, hint) {
  const error = hint.originalException;
  if (error instanceof HttpException && error.getStatus() < 500) {
    return null;  // Drop this event
  }
  return event;
},

Gotcha 4: Environment and Release Are Not Automatic

In Application Insights, the environment is inferred from your Azure deployment slot. In Sentry, it is whatever string you pass to environment in Sentry.init(). If you ship to production without setting SENTRY_DSN, NODE_ENV, and SENTRY_RELEASE in your deployment environment, Sentry receives events tagged as development from a null release, which makes issue tracking and regression detection useless.

Set these in your deployment platform (Render, Vercel, etc.) and validate them in app startup:

// src/instrument.ts — fail fast if misconfigured in production
if (process.env.NODE_ENV === 'production' && !process.env.SENTRY_DSN) {
  console.error('SENTRY_DSN is not set in production — errors will not be tracked');
}

Gotcha 5: tracesSampleRate of 1.0 in Production Will Bankrupt You

tracesSampleRate: 1.0 means capture 100% of transactions for performance monitoring. This is correct for development. In production on any real traffic volume, it creates enormous event volume. Use 0.1 (10%) or lower in production, or use tracesSampler to sample dynamically based on the route:

tracesSampler: (samplingContext) => {
  // Drop health checks entirely — they're useless noise
  if (samplingContext.name === 'GET /health') return 0;
  // Respect the parent's sampling decision so distributed traces stay complete
  if (samplingContext.parentSampled) return 1.0;
  // Default: 10%
  return 0.1;
},

Hands-On Exercise

Set up Sentry end-to-end in your NestJS project.

  1. Create a free Sentry account at sentry.io and create a project (select Node.js for the backend, TypeScript).

  2. Install @sentry/node and create src/instrument.ts with the initialization configuration. Set SENTRY_DSN in your .env file.

  3. Import instrument.ts as the first line in src/main.ts.

  4. Create the SentryExceptionFilter global exception filter and register it in main.ts.

  5. Add Sentry.setUser() to your auth guard or JWT validation middleware.

  6. Deliberately trigger a 500 error in a controller (throw a plain new Error('test error')) and verify it appears in Sentry with the correct user context.

  7. Add beforeSend to filter out 4xx errors, then verify a 404 does not appear in Sentry.

  8. If you have a Vite frontend, add @sentry/vite-plugin to your vite.config.ts, configure source map upload, and verify that a frontend error shows readable TypeScript file references in Sentry.

Quick Reference

Task | Code
Initialize (Node.js) | Sentry.init({ dsn, environment, release, tracesSampleRate })
Initialize (Vue) | Sentry.init({ app, dsn, integrations: [browserTracingIntegration({ router })] })
Capture exception | Sentry.captureException(error, { tags, extra })
Capture message | Sentry.captureMessage('text', 'warning')
Set user context | Sentry.setUser({ id, email })
Clear user context | Sentry.setUser(null)
Add breadcrumb | Sentry.addBreadcrumb({ category, message, level, data })
Custom span | Sentry.startSpan({ name, op }, async (span) => { ... })
Custom fingerprint | captureException(err, { fingerprint: ['custom-key', variable] })
Filter events | beforeSend(event, hint) { return null to drop }
Flush before process exit | await Sentry.flush(2000)
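
The flush call matters in short-lived processes (queue workers, cron jobs, one-off scripts), where Node can exit before queued events are delivered. A minimal sketch, assuming a hypothetical runJob function:

// worker.ts — make sure buffered Sentry events are sent before the process exits
import * as Sentry from '@sentry/node';

Sentry.init({ dsn: process.env.SENTRY_DSN });

async function runJob() {
  // ... the actual unit of work goes here ...
}

async function main() {
  try {
    await runJob();
  } catch (err) {
    Sentry.captureException(err);
    process.exitCode = 1;
  } finally {
    await Sentry.flush(2000);   // wait up to 2 seconds for the event queue to drain
  }
}

main();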

Environment Variables

Variable | Description | Required
SENTRY_DSN | Project DSN from Sentry settings | Yes
SENTRY_RELEASE | Release identifier (e.g., api@abc1234) | Recommended
SENTRY_AUTH_TOKEN | CLI token for source map upload | Build only
NODE_ENV | Maps to Sentry environment | Yes
NEXT_PUBLIC_SENTRY_DSN | Client-side DSN for Next.js | Yes (frontend)

Package Reference

Layer | Package
NestJS / Node.js | @sentry/node, @sentry/profiling-node
Next.js | @sentry/nextjs
Vue 3 (Vite) | @sentry/vue, @sentry/vite-plugin
Vite build plugin | @sentry/vite-plugin
Next.js build | handled by @sentry/nextjs

Further Reading

SonarCloud: Code Quality Analysis

For .NET engineers who know: SonarQube/SonarLint in Visual Studio, SonarScanner for .NET, quality gates in the Sonar dashboard, and code smell categories from the C# ruleset
You’ll learn: How SonarCloud configuration for TypeScript projects mirrors what you already know from C#, and where the TS ruleset is weaker and requires ESLint to fill the gaps
Time: 15-20 min read

The .NET Way (What You Already Know)

You have probably run SonarScanner against a C# project and watched the quality gate report appear in the Sonar dashboard. The workflow is: install dotnet-sonarscanner, run dotnet sonarscanner begin with project key and token, run dotnet build, run your tests with coverage collection, then run dotnet sonarscanner end. The scanner uploads the build output, test results, and coverage report to SonarCloud, which analyzes them and applies the quality gate.

The quality gate in C# typically checks: no new blocker or critical issues, code coverage above a threshold on new code, no new duplications above a threshold, and security hotspots reviewed. The Sonar C# ruleset is mature — it catches subtle issues like LINQ misuse, IDisposable not disposed, null reference patterns, and thread-safety violations.

SonarLint in Visual Studio gives you the same rules inline as you type. You can connect it to your SonarCloud project for synchronized rule configuration.

The SonarCloud Way

The transition for TypeScript is genuinely one of the easier ones in this manual. The concepts map directly: project setup, quality gates, code smell categories, security hotspots, and PR decoration all work the same. The main difference is the tool invocation (no dotnet build step) and the coverage report format.

Project Setup

Create a project in SonarCloud at sonarcloud.io. You can auto-import from GitHub, which is the fastest path — Sonar creates the project and configures the GitHub integration automatically.

You need a sonar-project.properties file at the repository root:

# sonar-project.properties
sonar.projectKey=your-org_your-project
sonar.organization=your-org
sonar.projectName=Your Project Name

# Source files to analyze
sonar.sources=src
# Test files — Sonar treats these differently (won't count coverage holes in tests)
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts,**/*.test.ts,**/*.spec.tsx,**/*.test.tsx
# Exclude generated files, node_modules, build output
sonar.exclusions=**/node_modules/**,**/dist/**,**/*.d.ts,**/coverage/**

# Coverage report path — must be lcov format (Vitest and Jest both produce this)
sonar.javascript.lcov.reportPaths=coverage/lcov.info

# Source encoding
sonar.sourceEncoding=UTF-8

For a monorepo with multiple packages, specify multiple source roots:

# Monorepo with separate frontend and backend
sonar.sources=apps/web/src,apps/api/src
sonar.tests=apps/web/src,apps/api/src
sonar.test.inclusions=**/*.spec.ts,**/*.test.ts
sonar.exclusions=**/node_modules/**,**/dist/**,**/*.d.ts
sonar.javascript.lcov.reportPaths=apps/web/coverage/lcov.info,apps/api/coverage/lcov.info

CI/CD Integration (GitHub Actions)

# .github/workflows/sonar.yml
name: SonarCloud Analysis

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  sonar:
    name: SonarCloud Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Shallow clones fail Sonar's blame data — always use full depth

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Run tests with coverage
        run: pnpm test --coverage
        # For NestJS with Jest: pnpm jest --coverage --coverageReporters=lcov

      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}   # For PR decoration
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

There is no dotnet sonarscanner begin/end dance. The GitHub Action handles everything — it invokes the scanner, passes the coverage report path from sonar-project.properties, and uploads results. The scanner auto-detects TypeScript.

Coverage Integration

Vitest produces LCOV coverage reports with minimal configuration:

// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',          // Or 'istanbul' — both produce lcov
      reporter: ['text', 'lcov', 'html'],
      reportsDirectory: './coverage',
      // Exclude test files and generated types from coverage
      exclude: [
        'node_modules/**',
        'dist/**',
        '**/*.spec.ts',
        '**/*.test.ts',
        '**/*.d.ts',
      ],
    },
  },
});
// package.json — script that CI runs
{
  "scripts": {
    "test": "vitest run",
    "test:coverage": "vitest run --coverage"
  }
}

For NestJS with Jest:

// jest.config.js
module.exports = {
  moduleFileExtensions: ['js', 'json', 'ts'],
  rootDir: 'src',
  testRegex: '.*\\.spec\\.ts$',
  transform: { '^.+\\.(t|j)s$': 'ts-jest' },
  collectCoverageFrom: ['**/*.(t|j)s', '!**/*.spec.ts', '!**/main.ts'],
  coverageDirectory: '../coverage',
  coverageReporters: ['text', 'lcov'],
  testEnvironment: 'node',
};

The LCOV file at coverage/lcov.info is what Sonar reads. Verify it exists after running pnpm test:coverage before debugging Sonar coverage issues.

Quality Gate Configuration

Quality gates in SonarCloud work exactly as in SonarQube. Navigate to Organization → Quality Gates to create or edit gates. Our recommended thresholds for TypeScript projects:

Metric | Condition | Threshold
New code coverage | Less than | 80%
New duplicated lines | Greater than | 3%
New blocker issues | Greater than | 0
New critical issues | Greater than | 0
New security hotspots reviewed | Less than | 100%
New code smells | Rating worse than | A
New reliability issues | Rating worse than | A

The “new code” focus is deliberate. Sonar’s default quality gate only checks new code introduced since the last analysis — the same logic as the .NET setup. This prevents the situation where legacy technical debt blocks every PR.

Code Smell Categories in TypeScript

The TypeScript ruleset covers similar categories to C#, with some notable gaps:

Cognitive complexity — Sonar measures function complexity the same way in TS as C#. Functions with complexity above 15 trigger a code smell. The threshold is configurable in the quality profile.
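
As an illustration (the shipping logic below is invented), deeply nested branching is what drives the score up, and flattening with early returns is the usual fix:

// Nested conditionals — each level of nesting adds to the cognitive complexity score
function shippingCost(order: { total: number; express: boolean; country: string }): number {
  if (order.country === 'US') {
    if (order.express) {
      if (order.total > 100) {
        return 0;
      } else {
        return 15;
      }
    } else {
      if (order.total > 50) {
        return 0;
      } else {
        return 5;
      }
    }
  } else {
    if (order.express) {
      return 30;
    } else {
      return 10;
    }
  }
}

// Flattened with early returns — same behavior, lower complexity score
function shippingCostFlat(order: { total: number; express: boolean; country: string }): number {
  if (order.country !== 'US') return order.express ? 30 : 10;
  if (order.express) return order.total > 100 ? 0 : 15;
  return order.total > 50 ? 0 : 5;
}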

Duplications — Identical to C#. Copy-pasted blocks across files are detected and reported. SonarCloud’s UI shows the duplicated code side-by-side.

Unused variables and imports — Caught by Sonar, but ESLint’s @typescript-eslint/no-unused-vars rule catches these faster, inline in your editor. Let ESLint handle this; Sonar is redundant here.

Type assertions (as any) — Sonar flags as any type assertions, which are the TS equivalent of unsafe casts. Each one is a code smell worth reviewing.
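
A small sketch of the pattern and its alternative, using Zod to validate at the boundary (the OrderSchema and raw payload below are invented):

import { z } from 'zod';

const OrderSchema = z.object({ id: z.string(), total: z.number() });
const raw = '{"id":"o_1","total":42}';

// Flagged: `as any` turns off the compiler for everything derived from this value
const unsafeOrder = JSON.parse(raw) as any;

// Preferred: parse and validate, which also gives you a typed value back
const order = OrderSchema.parse(JSON.parse(raw));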

var declarations — Sonar flags var in TypeScript. Use const/let. This should not come up if ESLint is configured with no-var.

Missing await — Sonar catches unhandled Promises and missing await keywords in async functions. This is a reliability issue, not just a style concern — unhandled rejections can crash the process.

// Sonar will flag this as a bug — Promise is not awaited
async function processOrder(id: number) {
  updateAuditLog(id);  // ← Missing await — a failure here becomes an unhandled promise rejection
  return getOrder(id);
}

// Fixed
async function processOrder(id: number) {
  await updateAuditLog(id);
  return getOrder(id);
}

console.log in production code — Sonar flags console.log statements. This is a code smell in application code (use a proper logger). It is not flagged in test files.
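
In NestJS application code, one common replacement is the built-in Logger (a minimal sketch):

import { Injectable, Logger } from '@nestjs/common';

@Injectable()
export class OrdersService {
  private readonly logger = new Logger(OrdersService.name);

  async placeOrder(orderId: string): Promise<void> {
    // console.log(`placing order ${orderId}`)  ← this is what Sonar flags in application code
    this.logger.log(`Placing order ${orderId}`);
  }
}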

Security Hotspot Review

Security hotspots in TypeScript match what you see in C#:

Hotspot Category | TypeScript Example | What to Check
SQL injection | Template literals in raw queries | Ensure parameterized queries
XSS | innerHTML, dangerouslySetInnerHTML | Verify input is sanitized
Cryptography | Use of weak algorithms | Ensure bcrypt, not md5
Authentication | JWT validation code | Verify algorithm, expiry, signature
CORS | cors({ origin: '*' }) | Ensure restricted origins in production
Environment variables | Direct process.env access | Verify secrets are not logged

Hotspots require a human review decision in the Sonar UI — “Safe,” “Acknowledge,” or “Fixed.” They do not block the quality gate unless “Security Hotspot Review Rate” is in your gate configuration. Add it — unreviewed hotspots are the point of the feature.

PR Decoration

When the CI job runs on a pull request, SonarCloud posts inline comments on the changed lines that have new issues, plus a summary comment with the quality gate result:

Quality Gate: FAILED
New Code Issues: 3 code smells, 1 bug
Coverage on New Code: 74% (required: 80%)

This requires:

  1. GITHUB_TOKEN in your CI environment (GitHub Actions provides this automatically)
  2. SonarCloud GitHub App installed on the repository
  3. The PR trigger in your workflow (on: pull_request)

The PR decoration in SonarCloud is indistinguishable from the SonarQube behavior you know from .NET. The inline comments appear as regular GitHub review comments from the “SonarCloud” bot.

SonarLint IDE Integration

SonarLint for VS Code and WebStorm shows Sonar issues inline as you type, without running the full scanner. Install the SonarLint extension and connect it to your SonarCloud project to synchronize the quality profile (so local rules match what CI will report).

VS Code: Extensions → SonarLint → Connected Mode → Add Connection → SonarCloud
WebStorm: Settings → Tools → SonarLint → Project Binding → Bind to SonarCloud project

Connected mode pulls the quality profile from your project, so you see the same rules locally that CI will enforce.

Key Differences

Concern | SonarQube/Scanner for .NET | SonarCloud for TypeScript | Notes
Scanner invocation | dotnet sonarscanner begin/end | GitHub Action or sonar-scanner CLI | No build step needed
Coverage format | OpenCover or Cobertura XML | LCOV (from Vitest or Jest) | Both are widely supported
Rule maturity | Very mature — catches subtle CLR bugs | Good but less mature | Supplement TS with ESLint
Language analysis | Compiled output analyzed | Source files analyzed directly | TypeScript is analyzed without compilation
Ruleset source | Built-in + Roslyn rules | Built-in + community TS rules | ESLint fills gaps
Build requirement | Requires dotnet build | No build required | Sonar reads TS source directly
Monorepo support | Per-project analysis | Single scanner run with multi-path config | Configure sonar.sources with multiple paths
Fetch depth in CI | Usually not an issue | Must be full depth (fetch-depth: 0) | Shallow clone breaks blame data

Gotchas for .NET Engineers

Gotcha 1: SonarCloud TS Rules Are Good, Not Great — ESLint Is Not Optional

The C# ruleset in Sonar is exceptionally mature. It catches thread-safety bugs, LINQ misuse, and subtle nullability issues that took years of community contribution to encode. The TypeScript ruleset is solid for the basics but misses many patterns that @typescript-eslint catches.

Do not treat SonarCloud as a replacement for ESLint on TypeScript projects. They are complementary:

  • ESLint: Fast, runs in editor, catches type-unsafe patterns, enforces team conventions
  • SonarCloud: Historical tracking, PR decoration, quality gate enforcement, coverage integration, security hotspots

If you only set up one, set up ESLint. If you set up both, configure them to avoid duplicate reporting of the same issues. Disable Sonar rules that are also enforced by ESLint so you only see each issue in one place.

Gotcha 2: Shallow Clone Breaks Sonar Analysis

GitHub Actions uses actions/checkout@v4 which performs a shallow clone by default (fetch-depth: 1). SonarCloud requires the full git history to compute blame data, identify new vs. existing code, and track when issues were introduced. Without full history, Sonar cannot determine what is “new code” — it treats everything as new, which makes the quality gate meaningless.

Always use fetch-depth: 0 in your checkout step:

- uses: actions/checkout@v4
  with:
    fetch-depth: 0   # Required for SonarCloud — never omit this

There is no equivalent issue in the .NET scanner because dotnet sonarscanner is typically run in environments with full checkout configured separately.

Gotcha 3: LCOV Report Path Must Exist Before the Scanner Runs

Sonar will not fail if the coverage report does not exist — it silently reports 0% coverage, which causes the quality gate to fail on coverage. This happens when:

  • The test run fails before writing the coverage file
  • The coverage reporter is not configured to produce LCOV (only HTML or text)
  • The path in sonar-project.properties does not match the actual file location

Debug this by checking whether coverage/lcov.info exists after the test step runs, before the Sonar step runs. Add a verification step in CI:

- name: Run tests with coverage
  run: pnpm test:coverage

- name: Verify coverage report exists
  run: |
    if [ ! -f coverage/lcov.info ]; then
      echo "ERROR: coverage/lcov.info not found"
      exit 1
    fi
    echo "Coverage report size: $(wc -l < coverage/lcov.info) lines"

Gotcha 4: Quality Gate Applies to New Code, Not Total Code

This is the same in .NET but worth repeating because engineers new to Sonar try to apply the gate to total coverage first. The default Sonar way is to measure coverage on new lines only — lines changed or added in the current branch compared to the main branch.

If you configure the gate on “overall” coverage instead of “new code” coverage, you will block every PR until you retrofit tests for all existing code. Use “new code” metrics in your gate, and use the “Coverage Trend” widget in the Sonar dashboard to monitor overall coverage separately.

Gotcha 5: Security Hotspots Do Not Block PRs by Default

Unlike issues (bugs, code smells), security hotspots do not block the quality gate unless you add a “Security Hotspot Review Rate” condition. Developers often assume that a security hotspot showing up in the PR will block merge — it will not, unless you configure it to.

Add this to your quality gate:

  • Security Hotspots Reviewed is less than 100% on new code

Then enforce the review workflow: every hotspot that appears on a PR must be reviewed (marked “Safe,” “Acknowledged,” or “Fixed”) before the quality gate passes. This is the intended workflow and mirrors how security reviewers would handle it in Azure DevOps with SonarQube server.

Hands-On Exercise

Configure SonarCloud for your NestJS or Next.js project from scratch.

  1. Create a SonarCloud account and import your repository from GitHub. Note the project key and organization slug.

  2. Create sonar-project.properties at the repository root with correct source paths, test inclusion patterns, and the LCOV coverage path.

  3. Add the Sonar GitHub Actions workflow. Run it against your main branch and confirm the first analysis completes. Check that the dashboard shows source files and that coverage is not 0%.

  4. Configure a quality gate with the thresholds from the table above. Assign it to your project under Project Settings → Quality Gate.

  5. Create a feature branch, introduce a deliberate code smell (a function with high cognitive complexity — nest six if statements), open a PR, and verify that SonarCloud posts a PR decoration comment identifying the issue.

  6. Install SonarLint in your editor, connect it to your SonarCloud project in connected mode, and confirm that the same issue appears inline in your editor before pushing.

  7. Investigate what issues exist in your current codebase. Read through the Security Hotspots tab and mark each one with a review decision.

Quick Reference

Task | Command / Config
Run scanner locally | npx sonar-scanner (with sonar-project.properties)
Set project key | sonar.projectKey=org_project in sonar-project.properties
Set coverage path (Vitest) | sonar.javascript.lcov.reportPaths=coverage/lcov.info
Set source directory | sonar.sources=src
Exclude files | sonar.exclusions=**/node_modules/**,**/dist/**
Exclude test files from source | sonar.test.inclusions=**/*.spec.ts,**/*.test.ts
Full git fetch in CI | fetch-depth: 0 in actions/checkout
Sonar token env var | SONAR_TOKEN in GitHub Secrets
PR decoration token | GITHUB_TOKEN (auto-provided by GitHub Actions)

sonar-project.properties Template

sonar.projectKey=your-org_your-project
sonar.organization=your-org
sonar.projectName=Your Project

sonar.sources=src
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts,**/*.test.ts,**/*.spec.tsx,**/*.test.tsx
sonar.exclusions=**/node_modules/**,**/dist/**,**/*.d.ts,**/coverage/**,**/*.config.ts

sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.sourceEncoding=UTF-8

Vitest Coverage Configuration

// vitest.config.ts
coverage: {
  provider: 'v8',
  reporter: ['text', 'lcov'],
  reportsDirectory: './coverage',
  exclude: ['**/*.spec.ts', '**/*.test.ts', '**/*.d.ts', 'node_modules/**'],
}

GitHub Actions Workflow

- uses: actions/checkout@v4
  with:
    fetch-depth: 0
- run: pnpm test:coverage
- uses: SonarSource/sonarcloud-github-action@master
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

Further Reading

Snyk: Dependency Vulnerability Scanning

For .NET engineers who know: OWASP Dependency-Check, Dependabot, NuGet audit (dotnet list package --vulnerable), and the NuGet advisory database
You’ll learn: How Snyk scans npm dependency trees for vulnerabilities, auto-generates fix PRs, and integrates into CI to block deployments on high-severity findings — and why the npm dependency model makes transitive vulnerability management more complex than NuGet
Time: 15-20 min read

The .NET Way (What You Already Know)

In the .NET ecosystem, dependency vulnerabilities come to your attention through several channels. GitHub Dependabot raises PRs to update vulnerable packages. dotnet list package --vulnerable queries the NuGet advisory database from the command line. OWASP Dependency-Check runs in CI and generates an HTML report with CVE references. If you use Azure DevOps, the Microsoft Security DevOps extension runs these scans automatically.

The NuGet package ecosystem has characteristics that make vulnerability management tractable. Package trees tend to be shallow — most .NET packages have few transitive dependencies compared to npm. NuGet’s lock files (packages.lock.json) are deterministic. Vulnerability data comes from a single authoritative source: the NuGet advisory database backed by GitHub’s advisory database.

# Check for vulnerable packages in .NET
dotnet list package --vulnerable --include-transitive

# Output:
# The following sources were used:
#    https://api.nuget.org/v3/index.json
#
# Project `MyApp`
# [net8.0]:
#    Top-level Package      Requested   Resolved   Severity   Advisory URL
#    > Newtonsoft.Json      12.0.1      12.0.1     High       https://github.com/advisories/GHSA-...

The Snyk Way

Snyk covers the same problem — finding vulnerable dependencies — but the npm ecosystem presents it with considerably more complexity. Before looking at the tool, you need to understand why.

The npm Dependency Tree Problem

A NuGet package typically has 5-20 transitive dependencies. An npm package commonly has 50-500. react-scripts alone installs over 1,300 packages. This is not an exaggeration:

# A typical Next.js project
$ npm list --all 2>/dev/null | wc -l
1847

This means vulnerability management in npm is a different class of problem. A vulnerability in a deep transitive dependency — one you have never heard of, in a package three levels down — can appear in your security report. You may not control the upgrade path because you do not directly depend on the vulnerable package.

Snyk addresses this with priority scoring, fix PRs that update the direct dependency responsible for pulling in the vulnerable transitive, and “accept” workflows for vulnerabilities without a fix.

Installing and Running Snyk CLI

# Install globally
npm install -g snyk

# Authenticate (opens browser for OAuth)
snyk auth

# Or authenticate with a token (for CI)
snyk auth $SNYK_TOKEN

Running a scan:

# Scan the current project's package.json and lockfile
snyk test

# Scan and output JSON (for CI parsing)
snyk test --json

# Scan and fail only on high or critical severity
snyk test --severity-threshold=high

# Scan including dev dependencies
snyk test --dev

# Show detailed vulnerability report
snyk test --all-projects  # Scans all package.json files in a monorepo

Example output:

Testing /path/to/project...

✗ High severity vulnerability found in semver
  Description: Regular Expression Denial of Service (ReDoS)
  Info: https://snyk.io/vuln/SNYK-JS-SEMVER-3247795
  Introduced through: node-gyp@9.4.0 > semver@5.7.1
  Fix: Upgrade node-gyp to version 9.4.1 or higher (semver will be upgraded to 5.7.2)
  Fixable? Yes (auto-fixable)

✗ Critical severity vulnerability found in loader-utils
  Description: Prototype Pollution
  Info: https://snyk.io/vuln/SNYK-JS-LOADERUTILS-3043103
  Introduced through: react-scripts@5.0.1 > loader-utils@1.4.0
  Fix: No fix available — loader-utils@1.x is unmaintained
  Fixable? No

✓ Tested 1,847 dependencies for known issues, found 2 issues.

The output is similar to dotnet list package --vulnerable --include-transitive, but with a critical addition: Snyk traces the introduction path (“Introduced through”) and tells you which direct dependency to upgrade to fix the transitive issue.

Vulnerability Report Structure

Each Snyk finding includes:

Field | Description
Severity | Critical, High, Medium, Low
CVE/CWE | Reference IDs for the vulnerability
CVSS score | Numeric severity (0-10)
Exploit maturity | Proof-of-concept, functional, no known exploit
Fix availability | Whether a patched version exists
Introduction path | Chain of packages that introduced this dependency
Fixable | Whether snyk fix can automatically resolve it

The exploit maturity field is important. A High severity vulnerability with “No known exploit” in a transitive testing dependency is meaningfully different from a Critical with “Functional exploit” in your production HTTP server. Snyk’s priority score combines severity, exploit maturity, and whether the vulnerable code path is actually reachable in your project.

Auto-Fix PRs

Snyk’s most useful feature for large teams is auto-fix PRs. Snyk monitors your repository continuously (via GitHub integration, not CI) and raises PRs when:

  • A new vulnerability is discovered in an existing dependency
  • A fix becomes available for a previously unfixable vulnerability
# Fix all auto-fixable vulnerabilities locally
snyk fix

# What snyk fix does:
# 1. Identifies vulnerable packages with available fixes
# 2. Determines the minimum upgrade to resolve the vulnerability
# 3. Updates package.json and runs pnpm install / npm install
# 4. Verifies the fix does not break your tests (if configured)

To enable automatic PRs in GitHub:

  1. Connect your GitHub repository at app.snyk.io
  2. Navigate to Settings → Integrations → GitHub
  3. Enable “Automatic fix PRs” and “New vulnerabilities”

Snyk will raise PRs like:

[Snyk] Security upgrade axios from 0.21.1 to 0.21.4

This PR was automatically created by Snyk to fix 1 vulnerability.

Vulnerability: Server-Side Request Forgery (SSRF)
Severity: High
CVE: CVE-2023-45857
Fixed in: axios@0.21.4

See https://snyk.io/vuln/SNYK-JS-AXIOS-... for details.

License Compliance

Snyk scans license metadata for every dependency. This is relevant when your project has commercial distribution obligations:

# Check licenses
snyk test --print-deps

# Or configure license policies at the organization level in app.snyk.io
# under Settings → Licenses

You can configure license policies to:

  • Allow: MIT, Apache-2.0, ISC, BSD-2-Clause, BSD-3-Clause (typical commercial use)
  • Warn: LGPL-2.0, LGPL-3.0 (linkage restrictions)
  • Fail: GPL-2.0, GPL-3.0, AGPL-3.0 (copyleft — review required)

Container Scanning

If you ship a Docker image (common for NestJS deployments):

# Scan a local Docker image
snyk container test your-image:latest

# Scan with fix recommendations
snyk container test your-image:latest --file=Dockerfile

# Output includes OS-level CVEs (from the base image) and npm vulnerabilities

This is the equivalent of scanning both the host system dependencies and your application dependencies together — something that dotnet list package --vulnerable does not do because .NET rarely ships in containers with OS-level attack surface.

CI/CD Integration

The standard pattern is to fail the build on high or critical severity vulnerabilities and warn on medium or lower:

# .github/workflows/security.yml
name: Security Scan

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 0 * * 1'   # Also run weekly — new CVEs appear on existing deps

jobs:
  snyk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Run Snyk vulnerability scan
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high --all-projects --sarif-file-output=snyk.sarif
        # Exit code 1 = vulnerabilities found above threshold → build fails
        # Exit code 0 = clean or only below-threshold issues → build passes

      - name: Upload Snyk results to GitHub Code Scanning
        uses: github/codeql-action/upload-sarif@v3
        if: always()   # Upload even if Snyk found issues
        with:
          sarif_file: snyk.sarif

For the .snyk ignore file (the equivalent of a suppression list):

# .snyk
# Suppress false positives or accepted risks
version: v1.25.0

ignore:
  SNYK-JS-SEMVER-3247795:
    - '*':
        reason: >
          This vulnerability is in a test-only devDependency and is not
          reachable from production code. Risk accepted until the dependency
          ships a fix.
        expires: '2026-06-01T00:00:00.000Z'  # Review this date

Monitoring and Continuous Scanning

Beyond CI, Snyk can monitor your deployed application for new vulnerabilities discovered after your last scan:

# Register your project for ongoing monitoring
snyk monitor

# This uploads a snapshot of your dependency tree to Snyk's servers.
# When a new CVE is disclosed for any of your dependencies, Snyk
# sends an alert and (if configured) raises a fix PR.

Run snyk monitor at the end of your deployment pipeline to keep the snapshot current.

Key Differences

Concern | .NET (Dependabot / OWASP DC) | Snyk | Notes
Vulnerability source | GitHub Advisory Database, NuGet advisories | Snyk’s own database (broader than NVD) | Snyk often has vulnerabilities before NVD
Transitive depth | Manageable — NuGet trees are shallow | Complex — npm trees are 100s of packages deep | Snyk traces introduction paths
Auto-fix PRs | Dependabot raises version bump PRs | Snyk raises targeted fix PRs with context | Snyk PRs include CVE details and rationale
License scanning | Not built into Dependabot | Built into Snyk | License policies configurable per org
Container scanning | Separate tool (Trivy, etc.) | Integrated into Snyk | Same tool, one license
Exploit maturity | Not provided by Dependabot | Provided and factored into priority score | Useful for triaging large backlogs
Suppression mechanism | dependabot.yml ignore rules | .snyk ignore file with expiry dates | Expiry dates enforce review cadence
Fix mechanism | Updates the package version in the manifest | snyk fix command + PR automation | Equivalent outcomes
Cost | Free (GitHub-native) | Free tier (200 tests/month), paid for teams | Evaluate based on repo count

Gotchas for .NET Engineers

Gotcha 1: The npm Dependency Tree Makes “Fix This Vulnerability” Non-Trivial

In NuGet, if Newtonsoft.Json 12.0.1 is vulnerable, you upgrade it to 13.0.1 in your .csproj. Done. The package is yours to control.

In npm, the vulnerable package may be four levels deep in the dependency tree, owned by a package you do not control. Consider this chain:

your-app
└── react-scripts@5.0.1
    └── webpack@5.88.0
        └── loader-utils@1.4.0   ← vulnerable

You cannot upgrade loader-utils directly because you do not depend on it directly. Your options are:

  1. Upgrade react-scripts to a version that pulls in a fixed loader-utils
  2. Use npm’s overrides (or pnpm’s overrides) to force a specific version of a transitive dependency
  3. Accept the risk if the code path is not reachable

Snyk will tell you which option is available for each vulnerability. Option 2 looks like:

// package.json — pnpm overrides
{
  "pnpm": {
    "overrides": {
      "loader-utils": "^3.2.1"
    }
  }
}

Use overrides carefully — they can cause peer dependency conflicts. Snyk’s fix PRs handle this automatically when it is safe to do so.

Gotcha 2: High Vulnerability Count Is Normal — Triage by Reachability

When you first run snyk test on an existing npm project, you may see 20-50 vulnerabilities. This is not a crisis. Most of them are in devDependencies (build tools, test runners) that never execute in production. Many are in deeply transitive packages that are not reachable from your code paths.

Triage strategy:

  1. Filter by --severity-threshold=high first — only High and Critical matter immediately
  2. Check if the vulnerable package is in dependencies vs. devDependencies — devDep vulnerabilities are lower priority
  3. Check exploit maturity — “No known exploit” is lower priority than “Functional exploit”
  4. Check reachability — Snyk’s paid tier includes reachability analysis to confirm whether the vulnerable function is actually called

Do not aim for zero vulnerabilities immediately. Aim for zero High/Critical in production dependencies. Set a quality gate for --severity-threshold=high in CI and accept the rest with documented rationale and expiry dates in .snyk.

Gotcha 3: pnpm install Does Not Automatically Fix Vulnerabilities

npm audit fix (and its pnpm equivalent pnpm audit --fix) attempts to automatically fix vulnerabilities by upgrading packages. But unlike snyk fix, it does not understand introduction paths or minimum required upgrades — it can introduce breaking changes by upgrading to major versions unexpectedly.

Prefer snyk fix over npm audit fix because Snyk’s fix understands the minimum upgrade required and verifies that the fix is safe before applying it. If you use pnpm, configure Snyk with:

snyk test --package-manager=pnpm
snyk fix --package-manager=pnpm

Gotcha 4: SNYK_TOKEN Must Never Be in Source Code

This is obvious but worth stating explicitly because the failure mode is severe. If SNYK_TOKEN is committed to the repository, anyone who forks the repository can exhaust your Snyk API quota and access your vulnerability reports (which describe exactly what is vulnerable in your application).

Store it in GitHub Actions Secrets (${{ secrets.SNYK_TOKEN }}), never in .env or CI configuration files committed to the repository. Rotate it immediately if it is ever exposed.
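With the GitHub CLI, which is the workflow this book favors, the secret can be created without opening the web UI. A minimal sketch:

# Store the token as an encrypted Actions secret (you are prompted for the value,
# so it never lands in shell history)
gh secret set SNYK_TOKEN

# Confirm it exists; values are never displayed back
gh secret list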

Gotcha 5: npm Supply Chain Attacks Are Not the Same as CVE-Based Vulnerabilities

Snyk scans against known CVE databases. npm supply chain attacks — where a malicious package impersonates a legitimate one (typosquatting), or a legitimate maintainer account is compromised and a malicious version is published — are not in the CVE database by definition. They are zero-day events.

Snyk’s snyk monitor provides some protection by alerting when a dependency’s hash changes unexpectedly. The more complete mitigations are:

  • Lock files committed to the repository (pnpm-lock.yaml)
  • --frozen-lockfile in CI (pnpm) or npm ci — never install without a lockfile
  • Package provenance verification (available in npm v9+ and pnpm v8+)
  • Private registry with vetted package mirrors for high-security environments

These mitigations are beyond what Snyk provides out of the box, but Snyk’s documentation covers them.

Hands-On Exercise

Run a full Snyk audit on your project and establish a security baseline.

  1. Install the Snyk CLI and authenticate with your Snyk account.

  2. Run snyk test and review the full output. Note the total count of vulnerabilities by severity.

  3. Run snyk test --severity-threshold=high to see only High and Critical findings. This is your actionable baseline.

  4. For each High/Critical finding:

    • Determine whether it is in dependencies or devDependencies
    • Read the Snyk advisory link to understand the vulnerability
    • Check if a fix is available (Snyk will indicate this)
    • Either apply the fix with snyk fix or add a suppression to .snyk with a rationale and expiry date
  5. Add the Snyk GitHub Actions workflow to your repository. Create a SNYK_TOKEN secret from your Snyk account settings. Push and verify the CI job runs and passes.

  6. Connect your repository in the Snyk UI at app.snyk.io and enable automatic fix PRs. Verify a PR appears if Snyk has any auto-fixable issues.

  7. Run snyk monitor at the end of your local workflow to register your project for ongoing monitoring.

Quick Reference

| Task | Command |
|---|---|
| Authenticate | snyk auth |
| Scan project | snyk test |
| Scan with threshold | snyk test --severity-threshold=high |
| Scan all projects (monorepo) | snyk test --all-projects |
| Auto-fix vulnerabilities | snyk fix |
| Register for monitoring | snyk monitor |
| Scan container image | snyk container test image:tag |
| Output JSON | snyk test --json > snyk-report.json |
| Scan dev deps too | snyk test --dev |
| Ignore a vulnerability | Add to .snyk with reason and expiry |

.snyk Ignore Template

version: v1.25.0
ignore:
  SNYK-JS-EXAMPLE-1234567:
    - '*':
        reason: >
          This vulnerability is in a devDependency not reachable in production.
          Review when the dependency ships a fix.
        expires: '2026-09-01T00:00:00.000Z'

pnpm Override Template

{
  "pnpm": {
    "overrides": {
      "vulnerable-transitive-package": ">=safe-version"
    }
  }
}

GitHub Actions Snippet

- name: Snyk Security Scan
  uses: snyk/actions/node@master
  env:
    SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
  with:
    args: --severity-threshold=high --all-projects

Further Reading

Semgrep: Static Analysis and Custom Rules

For .NET engineers who know: Roslyn Analyzers, DiagnosticAnalyzer, diagnostic descriptors, code fix providers, and the process of publishing custom analyzer NuGet packages
You’ll learn: How Semgrep provides the same capability — pattern-based static analysis with custom rules — without requiring you to compile anything, using YAML rules that match AST patterns across TypeScript, React, and Node.js code
Time: 15-20 min read

The .NET Way (What You Already Know)

Roslyn Analyzers are the extensibility point for custom static analysis in C#. You write a class that implements DiagnosticAnalyzer, register the syntax node types you want to inspect, walk the syntax tree using Roslyn’s symbol model, and emit Diagnostic instances when your rule fires. The analyzer ships as a NuGet package that gets loaded into the compiler. When a violation occurs, the IDE shows a squiggly line; in CI, the build fails with a diagnostic error.

Writing a custom Roslyn Analyzer is powerful and precise, but has real friction. You need a separate C# project, you work against Roslyn’s symbol API (which has a learning curve), and distributing the analyzer requires a NuGet package. For enforcing team conventions (“always use our Result<T> type instead of raw exceptions in service methods”), the overhead often outweighs the benefit.

// A minimal Roslyn Analyzer — considerable boilerplate for a simple rule
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class NoRawStringConnectionStringAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new(
        id: "SV001",
        title: "Connection string should not be a raw string literal",
        messageFormat: "Use IConfiguration to read the connection string instead of hardcoding it",
        category: "Security",
        defaultSeverity: DiagnosticSeverity.Error,
        isEnabledByDefault: true
    );

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.RegisterSyntaxNodeAction(
            AnalyzeStringLiteral,
            SyntaxKind.StringLiteralExpression
        );
    }

    private static void AnalyzeStringLiteral(SyntaxNodeAnalysisContext context)
    {
        var literal = (LiteralExpressionSyntax)context.Node;
        var value = literal.Token.ValueText;
        if (value.Contains("Server=") || value.Contains("Data Source="))
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, literal.GetLocation()));
        }
    }
}

That is a lot of infrastructure for “don’t hardcode connection strings.” Semgrep lets you write the equivalent rule in eight lines of YAML.

The Semgrep Way

Semgrep is a static analysis tool that matches code patterns using a syntax-aware AST matching engine. Instead of walking an AST in code, you write patterns that look like the code you want to match — with metavariables ($X, $FUNC) as wildcard placeholders. The engine handles the language parsing.

The key mental model shift: Semgrep rules are data (YAML), not code. They are readable, version-controlled alongside your application, and require no compilation or packaging.

Installing Semgrep

# Install via pip (the primary distribution channel)
pip3 install semgrep

# Or via Homebrew on macOS
brew install semgrep

# Verify installation
semgrep --version

# No authentication needed for open-source rules or local rules
# For Semgrep's managed service (semgrep.dev), authenticate:
semgrep login

Running Your First Scan

# Run Semgrep against the current directory using community rules
# The auto ruleset selects rules appropriate for the detected languages
semgrep --config=auto .

# Run a specific ruleset from the registry
semgrep --config=p/typescript .
semgrep --config=p/react .
semgrep --config=p/nodejs .
semgrep --config=p/owasp-top-ten .

# Run your own rules file
semgrep --config=.semgrep/rules.yml .

# Run all rules in a directory
semgrep --config=.semgrep/ .

# Fail with exit code 1 if any findings exist (for CI)
semgrep --config=.semgrep/ --error .

Rule Structure

A Semgrep rule has three required fields: an id, a pattern (or pattern combination), and a message. Everything else is optional metadata.

# .semgrep/rules.yml
rules:
  - id: no-hardcoded-connection-string
    patterns:
      - pattern: |
          "Server=$X;..."
      - pattern: |
          `Server=${...}...`
    message: >
      Hardcoded connection string detected. Read database credentials from
      environment variables via process.env or the config service.
    languages: [typescript, javascript]
    severity: ERROR

  - id: no-console-log-in-services
    pattern: console.log(...)
    paths:
      include:
        - 'src/*/services/**'
        - 'src/services/**'
      exclude:
        - '**/*.spec.ts'
        - '**/*.test.ts'
    message: >
      Use the injected Logger service instead of console.log in service classes.
      console.log bypasses structured logging and log level configuration.
    languages: [typescript]
    severity: WARNING
    fix: |
      this.logger.log(...)

Pattern Syntax

Semgrep’s pattern language is designed to look like the code it matches. The key constructs:

| Syntax | Meaning | Example |
|---|---|---|
| $X | Metavariable — matches any expression | $X.password |
| $...ARGS | Spread metavariable — matches zero or more args | fn($...ARGS) |
| ... | Ellipsis — matches zero or more statements | try { ... } catch { ... } |
| pattern | Single pattern to match | Direct match |
| patterns | All patterns must match (AND) | Multiple conditions |
| pattern-either | Any pattern must match (OR) | Alternatives |
| pattern-not | Exclude matches of this pattern | Negation |
| pattern-inside | Match only inside this context | Scope restriction |
| pattern-not-inside | Match only outside this context | Scope exclusion |
| metavariable-regex | Constrain a metavariable to a regex | $VAR where name matches pattern |

rules:
  # Matches any method call where the argument contains user input going into a
  # raw SQL string — the AND combinator (patterns:)
  - id: no-raw-sql-with-user-input
    patterns:
      - pattern: $DB.query($QUERY)
      - pattern-either:
        - pattern: $QUERY = `${$INPUT}...`
        - pattern: $QUERY = $A + $INPUT
      - pattern-not-inside: |
          // nosemgrep
    message: Potential SQL injection — use parameterized queries
    languages: [typescript, javascript]
    severity: ERROR

  # Matches dangerouslySetInnerHTML with any non-sanitized value
  - id: no-dangerous-innerhtml
    patterns:
      - pattern: <$EL dangerouslySetInnerHTML={{ __html: $X }} />
      - pattern-not: <$EL dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize($X) }} />
    message: >
      dangerouslySetInnerHTML with unsanitized content is an XSS vector.
      Wrap with DOMPurify.sanitize() or use a safer alternative.
    languages: [typescript, tsx]
    severity: ERROR

  # Matches JWT verification without algorithm specification
  - id: jwt-verify-without-algorithm
    patterns:
      - pattern: jwt.verify($TOKEN, $SECRET)
      - pattern-not: jwt.verify($TOKEN, $SECRET, { algorithms: [...] })
    message: >
      JWT verification without specifying algorithms allows algorithm confusion
      attacks. Explicitly specify: { algorithms: ['HS256'] }
    languages: [typescript, javascript]
    severity: ERROR

Community Rules for TypeScript and Node.js

Semgrep’s registry at semgrep.dev has hundreds of vetted rules for the JS/TS ecosystem. The most useful rulesets for our stack:

# TypeScript-specific rules (type assertion abuse, unused vars, etc.)
semgrep --config=p/typescript .

# React security and best practices
semgrep --config=p/react .

# Node.js security (command injection, path traversal, etc.)
semgrep --config=p/nodejs .

# Express.js specific patterns
semgrep --config=p/express .

# JWT security (the rules from the Gotchas section, from the registry)
semgrep --config=p/jwt .

# OWASP Top 10 mapped rules
semgrep --config=p/owasp-top-ten .

# Supply chain / secrets (complements Snyk)
semgrep --config=p/secrets .

Review the rules in a ruleset before adopting them. Each rule has a page on semgrep.dev with examples of what it matches and the rationale. Treat the registry as a starting point, not a complete solution — our custom rules enforce conventions the community rulesets cannot know about.

Writing Custom Rules for Team Conventions

This is where Semgrep earns its keep. Just as Roslyn Analyzers enforce C# conventions that would be too expensive to police manually in code review, custom Semgrep rules enforce your team’s TypeScript conventions.

Convention: Always use the injected LoggerService, never console.log

- id: use-logger-service-not-console
  pattern-either:
    - pattern: console.log($...ARGS)
    - pattern: console.warn($...ARGS)
    - pattern: console.error($...ARGS)
    - pattern: console.debug($...ARGS)
  paths:
    include:
      - 'src/**'
    exclude:
      - '**/*.spec.ts'
      - '**/*.test.ts'
      - 'src/instrument.ts'   # Sentry init before logger is available
  message: >
    Use the NestJS Logger service instead of console.*. Inject Logger with
    `private readonly logger = new Logger(MyService.name)` and use
    this.logger.log(), this.logger.warn(), this.logger.error().
  languages: [typescript]
  severity: WARNING

Convention: Never use any type assertion except in test files

- id: no-any-type-assertion
  pattern-either:
    - pattern: $X as any
    - pattern: <any>$X
  paths:
    include:
      - 'src/**'
    exclude:
      - '**/*.spec.ts'
      - '**/*.test.ts'
  message: >
    Avoid `as any` — it disables TypeScript's type safety at this call site.
    Use a specific type, `as unknown`, or a type guard instead.
  languages: [typescript]
  severity: WARNING

Convention: Zod must validate external input at API boundaries

- id: missing-zod-validation-on-request-body
  patterns:
    - pattern: |
        @Post(...)
        async $METHOD(@Body() $DTO: $TYPE) {
          ...
        }
    - pattern-not: |
        @Post(...)
        async $METHOD(@Body() $DTO: $TYPE) {
          const $PARSED = $SCHEMA.parse($DTO);
          ...
        }
  message: >
    POST handler receives a body parameter without Zod schema validation.
    Add `const parsed = CreateXxxSchema.parse(dto)` before using the body,
    or apply the ZodValidationPipe globally.
  languages: [typescript]
  severity: WARNING

Convention: Never call process.exit() in application code

- id: no-process-exit-in-application-code
  pattern: process.exit($CODE)
  paths:
    include:
      - 'src/**'
    exclude:
      - 'src/main.ts'   # Acceptable only at the top-level bootstrap
  message: >
    Do not call process.exit() in application code — it prevents NestJS from
    running shutdown hooks and can leave database connections open.
    Throw an exception and let the global exception filter handle it.
  languages: [typescript]
  severity: ERROR

Ignoring False Positives

Add an inline comment to suppress a specific rule at a specific line:

// nosemgrep: no-any-type-assertion
const response = await fetch(url) as any;

// Or suppress all rules at this line:
// nosemgrep
const legacyData = JSON.parse(raw) as any;

For file-level suppression:

// nosemgrep: no-console-log-in-services
// Reason: This file is a Sentry initialization shim that runs before
// the NestJS Logger is available.

False positives in rule design should be addressed in the rule itself, not suppressed at call sites. If you are suppressing the same rule frequently, the rule is too broad.
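As an illustration, a console-logging rule that keeps firing on CLI entry points is better narrowed with a path exclude than silenced at every call site (the cli path here is a hypothetical example):

- id: use-logger-service-not-console
  pattern: console.log(...)
  paths:
    include:
      - 'src/**'
    exclude:
      - '**/*.spec.ts'
      - '**/*.test.ts'
      - 'src/cli/**'      # CLI entry points legitimately write to stdout
  message: Use the injected Logger service instead of console.log.
  languages: [typescript]
  severity: WARNING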

CI Integration

# .github/workflows/semgrep.yml
name: Semgrep

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  semgrep:
    name: Static Analysis
    runs-on: ubuntu-latest
    container:
      image: returntocorp/semgrep

    steps:
      - uses: actions/checkout@v4

      - name: Run Semgrep with community and custom rules
        run: |
          # --error exits 1 on findings (blocks PR merge); --sarif emits the format GitHub code scanning expects
          semgrep \
            --config=p/typescript \
            --config=p/nodejs \
            --config=p/react \
            --config=.semgrep/ \
            --error \
            --sarif \
            --output=semgrep.sarif \
            src/

      - name: Upload SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: semgrep.sarif

Uploading SARIF to GitHub Security means findings appear in the Security → Code Scanning tab and as inline annotations on the diff in PRs, which is the Semgrep equivalent of SonarCloud’s PR decoration.

Performance Considerations

Semgrep is fast on small codebases but can be slow on large ones because it analyzes every file. Tune performance with:

# Skip files that do not need analysis
semgrep --exclude='**/*.min.js' --exclude='**/vendor/**'

# Run only rules relevant to changed files (in CI — requires git diff)
semgrep --config=.semgrep/ $(git diff --name-only origin/main...HEAD | grep '\.ts$')

# Limit rule set to only high-confidence rules for PR checks
# Run the full set in scheduled jobs, not on every push

For large monorepos, split the Semgrep job to run in parallel by package directory.
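A sketch of that split using a GitHub Actions matrix, assuming a packages/ monorepo layout (the package names are placeholders):

jobs:
  semgrep:
    runs-on: ubuntu-latest
    container:
      image: returntocorp/semgrep
    strategy:
      matrix:
        package: [api, web, shared]   # One parallel job per package directory
    steps:
      - uses: actions/checkout@v4
      - name: Scan one package
        run: semgrep --config=.semgrep/ --error packages/${{ matrix.package }}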

Key Differences

| Concern | Roslyn Analyzers | Semgrep | Notes |
|---|---|---|---|
| Rule language | C# code (DiagnosticAnalyzer) | YAML pattern rules | Semgrep is dramatically simpler to write |
| Compilation required | Yes — analyzer is a .NET assembly | No — YAML is interpreted | Semgrep rules can be edited and tested immediately |
| Pattern matching | Roslyn syntax/symbol API | AST-aware pattern language | Different API, similar power |
| Distribution | NuGet package | YAML files in repository | Semgrep rules are version-controlled in your repo |
| Language support | C# / VB.NET only | 30+ languages | Same tool scans TS, Python, Go, etc. |
| IDE integration | Native VS/Rider integration | VS Code extension, JetBrains plugin | Roslyn is more deeply integrated |
| Rule sharing | NuGet package | Semgrep registry (semgrep.dev) | Different distribution models |
| Fix suggestions | Code fix providers (CodeFixProvider) | fix: field in rule YAML | Semgrep fixes are simpler but functional |
| Community rules | Built into .NET analyzers | Semgrep registry | Both have mature community rulesets |
| Cost | Free (OSS) | Free CLI, paid for team management | Free tier is sufficient for most teams |

Gotchas for .NET Engineers

Gotcha 1: Pattern Matching Is Syntactic, Not Semantic

Roslyn Analyzers work at the semantic level. You can query the symbol table, resolve types, check if a method’s return type implements an interface, and traverse the full call graph. This lets you write rules like “warn if a method that returns IDisposable is not disposed in all control flow paths.”

Semgrep works at the syntax/AST level. It matches code patterns, not semantic relationships. This means you cannot ask “is this variable of type X?” or “does this method ever reach a code path that calls Y?” — at least not without Semgrep’s taint analysis mode (Pro tier).

Design your rules around what you can see syntactically:

# This works — pattern matching on syntax
- id: no-setTimeout-in-components
  pattern: setTimeout($CALLBACK, $DELAY)
  paths:
    include:
      - 'src/components/**'
      - 'src/pages/**'
  message: Clear timers in a useEffect cleanup (React) or onUnmounted hook (Vue) instead

# This does NOT work — requires semantic knowledge of types
# (Semgrep cannot determine whether $SERVICE is an HttpClient instance)
- id: no-http-in-component  # INVALID — Semgrep cannot resolve this
  pattern: $SERVICE.get($URL)
  # Would need to know that $SERVICE is of type HttpService

Gotcha 2: YAML Indentation Errors Produce Confusing Failures

Semgrep YAML rules are indentation-sensitive. A misaligned pattern-not or an either block with inconsistent indentation will either silently do the wrong thing or fail with a cryptic parse error.

Always validate rules before running them in CI:

# Validate rule syntax
semgrep --validate --config=.semgrep/rules.yml

# Test rules against fixture files (the right way to develop rules)
# See: semgrep --test

Use Semgrep’s Playground at semgrep.dev/playground to write and test rules interactively before committing them. It shows you exactly what code each pattern matches, which is faster than the edit-run-check cycle locally.
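For repeatable, in-repo rule development, semgrep --test pairs each rule file with a fixture of the same base name in the same directory; ruleid: comments mark lines that must match and ok: comments mark lines that must not. A minimal sketch, reusing the console-logging rule id from earlier:

// .semgrep/no-console-log-in-services.ts (fixture next to no-console-log-in-services.yml)

// ruleid: no-console-log-in-services
console.log('debug output in a service');

// ok: no-console-log-in-services
this.logger.log('structured log entry');

Running semgrep --test .semgrep/ then reports any annotation that did not behave as declared, which makes rule regressions visible in CI.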

Gotcha 3: Community Rules Have False Positives — Review Before Adopting

The Semgrep registry has excellent rules, but “excellent” in the context of a community ruleset means “works for most teams most of the time.” Some rules will fire on patterns in your codebase that are deliberate and correct. Running semgrep --config=p/nodejs --error on an existing project without reviewing each rule first will cause spurious CI failures.

The correct workflow:

  1. Run semgrep --config=p/nodejs (without --error) and review all findings
  2. For each finding: decide if it is a real issue or a false positive
  3. For real issues: fix them
  4. For false positives: either suppress inline with // nosemgrep or exclude the rule from your configuration
  5. Once clean, add --error to the CI invocation
# Run a ruleset and output JSON for systematic review
semgrep --config=p/nodejs --json > findings.json

# Review the rules that fired (to exclude the noisy ones)
cat findings.json | jq '[.results[].check_id] | unique'

Gotcha 4: Semgrep Does Not Replace ESLint or TypeScript Compiler Checks

Semgrep, ESLint, and the TypeScript compiler check different things. They are complementary, not redundant:

  • TypeScript compiler: Type errors, structural type violations, undefined variables
  • ESLint: Style conventions, common antipatterns, React hooks rules, accessibility
  • Semgrep: Security patterns, team-specific conventions, cross-file enforcement

The common mistake is trying to enforce style rules (import ordering, naming conventions) with Semgrep when ESLint has better support for those. Semgrep shines for rules that ESLint cannot express — multi-file patterns, complex condition combinations, and security-specific patterns where the rule needs to be opaque to the developer (to prevent cargo-culting around it).

Gotcha 5: Taint Tracking Requires Pro Tier for Full Effectiveness

Semgrep’s free tier has excellent pattern matching. Its taint analysis — tracing untrusted input through the code to an unsafe sink — requires the Pro (paid) tier. Taint analysis is what lets Semgrep catch:

// User input from the request body flows to a shell command without sanitization
const filename = req.body.filename;
exec(`ls ${filename}`);  // Command injection — taint flows from req.body to exec

In the free tier, you can write a pattern that catches exec( + template literal, but you cannot trace whether the template variable contains user input. For the most impactful security rules, taint analysis is worth the cost. For convention enforcement, the free tier is sufficient.
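For reference, taint rules are declared with mode: taint plus source and sink patterns. A hedged sketch of what a rule for the exec example above might look like (the rule id and patterns are illustrative, and how far the flow is traced depends on the tier limits described above):

rules:
  - id: user-input-reaches-exec
    mode: taint
    pattern-sources:
      - pattern: req.body
      - pattern: req.query
    pattern-sinks:
      - pattern: exec(...)
      - pattern: execSync(...)
    message: User-controlled input flows into a shell command (possible command injection).
    languages: [typescript, javascript]
    severity: ERROR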

Hands-On Exercise

Write and deploy three custom Semgrep rules for your project’s conventions.

  1. Create a .semgrep/ directory at your repository root and add a rules.yml file.

  2. Write a rule that prevents console.log in service files (under src/*/services/ or similar) and allows it in test files. Test it against your codebase with semgrep --config=.semgrep/rules.yml src/.

  3. Write a rule that catches missing await on async function calls where the result is discarded. Hint: look at the Semgrep pattern for pattern: $X($...ARGS) combined with context about async calls.

  4. Write a rule that enforces that all NestJS route handlers using @Body() include a validation step. (See the example in the article for the starting point.)

  5. Add the Semgrep GitHub Actions workflow. Run it on a PR branch and verify findings appear as annotations on the diff.

  6. Browse the Semgrep registry at semgrep.dev/r and find three rules from p/nodejs or p/typescript that apply to your codebase. Add them to your CI config.

  7. Introduce a deliberate violation of one of your custom rules (add a console.log to a service), push to a branch, open a PR, and verify the CI job fails with a clear message referencing the rule.

Quick Reference

| Task | Command |
|---|---|
| Run with community rules | semgrep --config=auto . |
| Run specific ruleset | semgrep --config=p/typescript . |
| Run custom rules | semgrep --config=.semgrep/ . |
| Fail CI on findings | semgrep --config=.semgrep/ --error . |
| Output SARIF | semgrep --sarif --output=results.sarif . |
| Validate rule syntax | semgrep --validate --config=.semgrep/rules.yml |
| Test rules against fixtures | semgrep --test .semgrep/ |
| Suppress at a line | // nosemgrep: rule-id |
| Suppress all rules at line | // nosemgrep |

Minimal Rule Template

rules:
  - id: rule-id-in-kebab-case
    pattern: |
      the_pattern_to_match(...)
    message: >
      What is wrong and how to fix it. One paragraph.
    languages: [typescript]
    severity: ERROR   # ERROR | WARNING | INFO
    paths:
      include:
        - 'src/**'
      exclude:
        - '**/*.spec.ts'
        - '**/*.test.ts'

Pattern Combination Reference

# AND — all must match
patterns:
  - pattern: A
  - pattern-not: B

# OR — any must match
pattern-either:
  - pattern: A
  - pattern: B

# Context restriction
patterns:
  - pattern: dangerous_call()
  - pattern-inside: |
      function $FUNC(...) { ... }

Severity Levels and CI Behavior

| Severity | Exit Code | PR Impact |
|---|---|---|
| ERROR | 1 (with --error) | Blocks merge |
| WARNING | 0 | Annotation only |
| INFO | 0 | Annotation only |

Further Reading

Security Best Practices: OWASP Top 10 in the Node.js Context

For .NET engineers who know: OWASP Top 10, ASP.NET Core’s built-in security features (anti-forgery tokens, output encoding, [Authorize]), Entity Framework parameterization, and Data Protection APIs
You’ll learn: How each OWASP Top 10 vulnerability manifests differently in Node.js and TypeScript, how our toolchain detects each one, and the mitigation patterns specific to our stack
Time: 15-20 min read

The .NET Way (What You Already Know)

ASP.NET Core ships with strong defaults for most OWASP Top 10 mitigations built into the framework. Entity Framework Core parameterizes queries by default. Razor automatically HTML-encodes output. [ValidateAntiForgeryToken] handles CSRF. [Authorize] enforces authentication at the controller or action level. The Data Protection API handles key management for tokens and cookies.

The Node.js ecosystem does not have a single framework with comparable built-in defaults. NestJS provides structure, but security is assembled from npm packages, middleware configuration, and explicit choices. The same vulnerabilities exist; some are harder to trigger accidentally, some are easier.

This article covers the OWASP Top 10 (2021 edition) in the context of our NestJS + Next.js + Vue stack. For each vulnerability: how it manifests in our stack, how the tools from this track detect it, and the mitigation pattern we use.

A01: Broken Access Control

How it manifests: Missing authorization checks on NestJS endpoints, insecure direct object references (IDOR) in API parameters, or client-side-only access control that is bypassed by direct API calls.

Detection: Semgrep can flag routes missing a guard. SonarCloud flags missing authorization checks on sensitive operations.

The .NET equivalent: [Authorize] on controllers, [AllowAnonymous] as an explicit opt-out, and resource-based authorization via IAuthorizationService.

Mitigation in NestJS:

// WRONG — public by default, hoping route obscurity protects it
@Controller('admin')
export class AdminController {
  @Get('users')
  getAllUsers() {
    return this.usersService.findAll();  // Accessible to anyone
  }
}

// CORRECT — guard at controller level, explicit opt-out at method level
@Controller('admin')
@UseGuards(ClerkAuthGuard, AdminRoleGuard)  // Applied to ALL methods in this controller
export class AdminController {
  @Get('users')
  getAllUsers() {
    return this.usersService.findAll();
  }

  @Get('health')
  @Public()  // Explicit opt-out decorator — requires discipline to use correctly
  healthCheck() {
    return { status: 'ok' };
  }
}

IDOR — always verify ownership before returning data:

// WRONG — any authenticated user can read any order by guessing the ID
@Get(':id')
async getOrder(@Param('id') id: string, @CurrentUser() user: User) {
  return this.ordersService.findById(id);
}

// CORRECT — enforce that the requesting user owns this resource
@Get(':id')
async getOrder(@Param('id') id: string, @CurrentUser() user: User) {
  const order = await this.ordersService.findById(id);
  if (!order) throw new NotFoundException();
  // Ownership check — the critical line
  if (order.userId !== user.id) throw new ForbiddenException();
  return order;
}
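One way to keep the ownership check from being forgotten handler by handler is to centralize it in a small helper; a sketch following the naming in the example above:

import { ForbiddenException, NotFoundException } from '@nestjs/common';

// Generic ownership assertion for "get one resource" handlers.
// Throws 404 when the record is missing and 403 when it belongs to another user.
export function assertOwnership<T extends { userId: string }>(
  resource: T | null,
  currentUserId: string,
): T {
  if (!resource) throw new NotFoundException();
  if (resource.userId !== currentUserId) throw new ForbiddenException();
  return resource;
}

// Usage in the controller above:
// return assertOwnership(await this.ordersService.findById(id), user.id);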

A02: Cryptographic Failures

How it manifests: Storing passwords with weak hashing (MD5, SHA1), transmitting sensitive data over HTTP, logging secrets, or using JWT with alg: none.

Detection: Snyk flags packages with known weak cryptography. Semgrep rules catch md5, sha1, and alg: none patterns. SonarCloud flags hardcoded secrets and weak algorithms.

Mitigation:

// Password hashing — bcrypt is the standard
import * as bcrypt from 'bcrypt';

const SALT_ROUNDS = 12;  // Keep this a fixed constant; do not make it configurable per environment

async function hashPassword(plaintext: string): Promise<string> {
  return bcrypt.hash(plaintext, SALT_ROUNDS);
}

async function verifyPassword(plaintext: string, hash: string): Promise<boolean> {
  return bcrypt.compare(plaintext, hash);
}

// WRONG — never use these for passwords
import * as crypto from 'crypto';
const hash = crypto.createHash('md5').update(password).digest('hex');   // No
const hash = crypto.createHash('sha1').update(password).digest('hex');  // No
const hash = crypto.createHash('sha256').update(password).digest('hex'); // No — unsalted

For JWT configuration, always specify the algorithm explicitly:

// WRONG — algorithm confusion attack: attacker changes header to "none" or "HS256"
// when the server expects RS256
jwt.verify(token, publicKey);

// CORRECT
jwt.verify(token, publicKey, { algorithms: ['RS256'] });

// Never accept "none" as an algorithm — it means no signature verification
// This should be enforced by always specifying the algorithm array

A03: Injection

SQL Injection

How it manifests: String concatenation or template literals in raw SQL queries. Prisma’s query builder is parameterized by default — SQL injection through Prisma requires deliberately bypassing it with $queryRaw.

Detection: Semgrep’s p/nodejs ruleset includes SQL injection patterns. SonarCloud’s security hotspot scanner flags template literals in SQL context.

// WRONG — template literal in raw SQL
const userId = req.body.userId;
const result = await prisma.$queryRaw`
  SELECT * FROM users WHERE id = ${userId}
`;
// Wait — this is actually SAFE. Prisma's tagged template sanitizes parameters.

// ACTUALLY WRONG — string concatenation bypasses parameterization
const result = await prisma.$queryRawUnsafe(
  `SELECT * FROM users WHERE id = ${userId}`  // ← SQL injection
);

// CORRECT — always use $queryRaw (tagged template) not $queryRawUnsafe
const result = await prisma.$queryRaw`
  SELECT * FROM users WHERE id = ${userId}
`;

// Or better, use Prisma's typed query builder (no raw SQL at all)
const user = await prisma.user.findUnique({ where: { id: userId } });

NoSQL Injection

This is a Node.js-specific concern with no direct .NET equivalent. MongoDB queries accept JavaScript objects, and if you construct those objects from user input, users can inject MongoDB query operators.

// WRONG — if req.body.username is { "$gt": "" }, this matches all users
const user = await db.collection('users').findOne({
  username: req.body.username,  // User-controlled MongoDB query object
  password: req.body.password,
});

// CORRECT — validate and type the input with Zod before using it
import { z } from 'zod';

const LoginSchema = z.object({
  username: z.string().min(1).max(100),  // Only accepts strings
  password: z.string().min(1),
});

const { username, password } = LoginSchema.parse(req.body);
const user = await db.collection('users').findOne({ username, password });

Command Injection

Node.js’s child_process module is dangerous. There is no safe child_process.exec() with user input — period.

import { exec, execFile } from 'child_process';

// WRONG — shell expansion allows command injection
const filename = req.body.filename;
exec(`convert ${filename} output.png`, callback);
// User sends: filename = "image.jpg; rm -rf /"

// WRONG — exec always runs through the shell
exec(`ffmpeg -i ${userInput} -o output.mp4`);

// CORRECT — execFile does not spawn a shell; user input is passed as arguments
execFile('convert', [filename, 'output.png'], callback);

// CORRECT — validate filename format before use
const SafeFilenameSchema = z.string().regex(/^[\w\-\.]+$/, 'Invalid filename');
const safeFilename = SafeFilenameSchema.parse(req.body.filename);
execFile('convert', [safeFilename, 'output.png'], callback);

// BEST — avoid child_process entirely; use native Node.js packages
// for the task (Sharp for images, FFmpeg bindings, etc.)

Detection: Semgrep’s p/nodejs ruleset flags child_process.exec with non-literal arguments. This is one of the highest-value Semgrep rules to enable.

A04: Insecure Design

How it manifests: Business logic flaws — race conditions, missing rate limiting, predictable resource IDs, or over-privileged API endpoints.

Mitigation patterns:

// Rate limiting: @nestjs/throttler (the NestJS counterpart of express-rate-limit)
import { Throttle, ThrottlerGuard, ThrottlerModule } from '@nestjs/throttler';

// In AppModule (throttler v5+: forRoot takes an array and ttl is in milliseconds)
ThrottlerModule.forRoot([
  {
    ttl: 60_000,   // Time window: 60 seconds
    limit: 100,    // Max requests per window
  },
]),

// Apply to specific endpoints (e.g., auth routes get tighter limits)
@UseGuards(ThrottlerGuard)
@Throttle({ default: { ttl: 60_000, limit: 5 } })
@Post('auth/login')
async login(@Body() dto: LoginDto) { ... }

A05: Security Misconfiguration

How it manifests: CORS configured to accept any origin, development error messages in production, missing security headers, or default credentials left in place.

Detection: Snyk flags packages with known misconfiguration issues. Semgrep rules catch cors({ origin: '*' }) patterns.

Mitigation:

// src/main.ts — production security configuration
async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Helmet — sets security headers (equivalent to adding security middleware in ASP.NET)
  // X-Content-Type-Options, X-Frame-Options, Strict-Transport-Security, etc.
  app.use(helmet());

  // CORS — restrict to known origins, never '*' in production
  app.enableCors({
    origin: process.env.ALLOWED_ORIGINS?.split(',') ?? ['http://localhost:3000'],
    methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'],
    credentials: true,   // Required for cookies / Authorization headers
  });

  // Never expose detailed errors in production
  // NestJS's built-in exception filter omits stack traces in production
  // Verify: NODE_ENV=production in your deployment environment
}
# Install helmet — security headers for Express/NestJS
pnpm add helmet

A06: Vulnerable and Outdated Components

This is covered in detail in Article 7.3 (Snyk). The summary:

  • snyk test --severity-threshold=high in CI blocks deployments on high/critical vulnerabilities
  • snyk monitor provides continuous monitoring for new CVEs against your deployed dependency snapshot
  • Lock files (pnpm-lock.yaml) committed to the repository ensure deterministic installs
  • Scheduled weekly Snyk scans catch new CVEs on unchanged dependency trees

A07: Identification and Authentication Failures

How it manifests in Node.js: JWT algorithm confusion, missing token expiry validation, weak session secrets, token storage in localStorage (XSS-accessible), or missing brute-force protection.

Detection: Semgrep’s p/jwt ruleset catches algorithm confusion patterns. Snyk flags known-vulnerable versions of JWT libraries.

JWT pitfalls specific to Node.js:

// WRONG — algorithm confusion: attacker can change "alg" in JWT header
const decoded = jwt.verify(token, secret);

// WRONG — accepting multiple algorithms includes the dangerous ones
const decoded = jwt.verify(token, secret, { algorithms: ['HS256', 'RS256'] });
// If the server uses RS256 and you include HS256, an attacker can sign a
// token with the public key (which is, by definition, public) using HS256.

// CORRECT — exactly one algorithm, matched to your key type
const decoded = jwt.verify(token, publicKey, { algorithms: ['RS256'] });

// WRONG — no expiry check (exp claim ignored if not specified in options)
const decoded = jwt.decode(token);  // decode() does NOT verify — common confusion

// CORRECT — verify() checks signature AND standard claims (exp, iat, etc.)
const decoded = jwt.verify(token, secret, {
  algorithms: ['HS256'],
  issuer: 'your-app',
  audience: 'your-users',
});

We use Clerk for authentication — the above applies if you are implementing custom JWT handling. With Clerk, authentication is handled by @clerk/nextjs and @clerk/express — you verify sessions with the Clerk SDK, not raw JWT operations. Do not reimplement authentication from scratch if Clerk covers the use case.

Token storage:

  • Clerk uses HttpOnly cookies for session tokens — inaccessible to JavaScript, protected against XSS
  • If you manage your own JWTs, store them in HttpOnly cookies, not localStorage (see the sketch after this list)
  • localStorage is accessible to any JavaScript on the page — a single XSS vulnerability leaks all tokens
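If you do issue your own JWT, a hedged sketch of setting it as an HttpOnly cookie from a NestJS handler (the cookie name, lifetime, and issueJwt helper are illustrative placeholders):

import { Controller, Post, Res } from '@nestjs/common';
import type { Response } from 'express';

@Controller('auth')
export class AuthController {
  @Post('login')
  async login(@Res({ passthrough: true }) res: Response) {
    const token = await this.issueJwt();  // Sign with one explicit algorithm, as above

    // HttpOnly + Secure + SameSite: invisible to page JavaScript, HTTPS-only, CSRF-resistant
    res.cookie('session', token, {
      httpOnly: true,
      secure: process.env.NODE_ENV === 'production',
      sameSite: 'lax',
      maxAge: 60 * 60 * 1000,  // 1 hour
    });

    return { ok: true };
  }

  private async issueJwt(): Promise<string> {
    return 'signed.jwt.token';  // Placeholder for your real signing logic
  }
}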

A08: Software and Data Integrity Failures

How it manifests: npm supply chain attacks, CI/CD pipeline compromise, deserialization of untrusted data without validation.

Mitigation:

# Always use frozen lockfile installs in CI — never install from package.json directly
pnpm install --frozen-lockfile  # pnpm
npm ci                          # npm

# Verify registry signatures and provenance attestations for installed packages (npm 9+)
npm audit signatures

# Raise the severity level at which npm audit fails the build
npm config set audit-level=high

// WRONG — deserializing unknown data without validation
const data = JSON.parse(req.body.payload);  // Trusting user-provided JSON structure

// CORRECT — validate with Zod after parsing
import { z } from 'zod';

const WebhookPayloadSchema = z.object({
  event: z.string(),
  data: z.object({
    id: z.string().uuid(),
    amount: z.number().positive(),
  }),
});

const payload = WebhookPayloadSchema.parse(JSON.parse(req.body.payload));

A09: Security Logging and Monitoring Failures

How it manifests: Exceptions swallowed without logging, Sentry not configured, error messages that leak implementation details, or logs containing sensitive data.

Detection: Sentry (Article 7.1) handles error tracking. SonarCloud flags caught exceptions with empty catch blocks.

// WRONG — swallowing exceptions silently
try {
  await processPayment(dto);
} catch {
  // Nothing logged — this payment failure disappears completely
}

// WRONG — logging sensitive data
this.logger.error('Payment failed', {
  cardNumber: dto.cardNumber,  // PCI violation
  cvv: dto.cvv,
});

// CORRECT — log the error without sensitive data, capture to Sentry
try {
  await processPayment(dto);
} catch (error) {
  this.logger.error('Payment processing failed', {
    orderId: dto.orderId,
    // No card data — log only non-sensitive context
  });
  Sentry.captureException(error, { extra: { orderId: dto.orderId } });
  throw new InternalServerErrorException('Payment processing failed');
}

A10: Server-Side Request Forgery (SSRF)

How it manifests: Node.js’s fetch or axios called with user-controlled URLs. If your server fetches a URL provided by the user, an attacker can point it at internal services, AWS metadata endpoints, or other resources the server can reach but the user cannot.

This is a Node.js concern that rarely appears in .NET enterprise applications because most .NET apps use HttpClient with strongly-typed API clients. Node.js developers write fetch(userInput) more casually.

Detection: Semgrep rules that flag fetch($URL) or axios.get($URL) where $URL derives from request parameters.

// WRONG — fetch with user-controlled URL
@Post('preview')
async fetchPreview(@Body() dto: { url: string }) {
  const response = await fetch(dto.url);  // SSRF — user controls the target
  return response.text();
}

// Attacker sends: { "url": "http://169.254.169.254/latest/meta-data/" }
// Server fetches the AWS metadata endpoint with the instance's credentials

// CORRECT — validate URL against an allowlist
import { z } from 'zod';

const AllowedOrigins = ['https://api.trusted-partner.com', 'https://cdn.example.com'];

const PreviewSchema = z.object({
  url: z.string().url().refine(
    (url) => AllowedOrigins.some((origin) => url.startsWith(origin)),
    { message: 'URL must be from an allowed origin' }
  ),
});

@Post('preview')
async fetchPreview(@Body() dto: { url: string }) {
  const { url } = PreviewSchema.parse(dto);
  const response = await fetch(url);
  return response.text();
}

For more complex cases, block private IP ranges:

import dns from 'dns/promises';

async function isSafeToFetch(urlString: string): Promise<boolean> {
  const url = new URL(urlString);

  // Block non-HTTP schemes
  if (!['http:', 'https:'].includes(url.protocol)) return false;

  // Resolve hostname and check for private IPs
  const addresses = await dns.lookup(url.hostname, { all: true });
  for (const addr of addresses) {
    if (isPrivateIp(addr.address)) return false;
  }

  return true;
}

function isPrivateIp(ip: string): boolean {
  return (
    ip.startsWith('10.') ||
    ip.startsWith('172.16.') ||     // Simplified: the full private range is 172.16.0.0/12 (172.16.x through 172.31.x)
    ip.startsWith('192.168.') ||
    ip === '127.0.0.1' ||
    ip === '::1' ||
    ip.startsWith('169.254.')       // Link-local range, includes the AWS metadata endpoint
  );
}

XSS: Special Attention

XSS deserves expanded treatment because the behavior differs significantly between React, Vue, and server-rendered HTML.

React: Auto-Escaping Is Your Default Protection

React escapes all dynamic content by default. This is a significant improvement over the manual output encoding that earlier server-rendered web development required:

// SAFE — React escapes this automatically
const UserProfile = ({ displayName }: { displayName: string }) => (
  <div>Hello, {displayName}</div>  // Rendered as text, not HTML
);

// UNSAFE — bypasses React's escaping
const RichContent = ({ html }: { html: string }) => (
  <div dangerouslySetInnerHTML={{ __html: html }} />  // XSS if html is from user input
);

// CORRECT — sanitize before using dangerouslySetInnerHTML
import DOMPurify from 'dompurify';

const RichContent = ({ html }: { html: string }) => (
  <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(html) }} />
);
# Install DOMPurify — the standard XSS sanitizer for the browser
pnpm add dompurify
pnpm add -D @types/dompurify

Detection: Semgrep’s p/react ruleset includes rules that flag dangerouslySetInnerHTML without DOMPurify.sanitize().

Vue: Similar Pattern with Different API

<!-- SAFE — auto-escaped -->
<template>
  <div>{{ userContent }}</div>
</template>

<!-- UNSAFE — v-html is the Vue equivalent of dangerouslySetInnerHTML -->
<template>
  <div v-html="userContent"></div>  <!-- XSS if userContent is from user input -->
</template>

<!-- CORRECT — sanitize before binding -->
<template>
  <div v-html="sanitizedContent"></div>
</template>

<script setup lang="ts">
import { computed } from 'vue';
import DOMPurify from 'dompurify';

const props = defineProps<{ userContent: string }>();
const sanitizedContent = computed(() => DOMPurify.sanitize(props.userContent));
</script>

Server-Side XSS (Next.js)

Next.js server components and API routes that return HTML content:

// WRONG — setting HTML response directly with user input
// (This pattern is rare in React apps but appears in API routes)
res.setHeader('Content-Type', 'text/html');
res.send(`<div>${req.query.name}</div>`);  // XSS

// CORRECT — use a template that escapes, or return JSON
res.json({ name: req.query.name });  // JSON-encoded, safe

CSRF: SameSite Cookies vs. Anti-Forgery Tokens

The .NET way: [ValidateAntiForgeryToken] with @Html.AntiForgeryToken(). The framework generates a synchronizer token pair and validates it on state-changing requests.

The Node.js way: SameSite cookie attributes largely replace token-based CSRF protection for modern browsers. If you are using HttpOnly; Secure; SameSite=Strict or SameSite=Lax cookies for session management, CSRF is mitigated for the vast majority of attack scenarios.

// Session cookie configuration in NestJS (using express-session or similar)
app.use(
  session({
    cookie: {
      httpOnly: true,        // Prevents XSS access
      secure: process.env.NODE_ENV === 'production',  // HTTPS only in prod
      sameSite: 'lax',       // Prevents CSRF for most cases
      maxAge: 24 * 60 * 60 * 1000,  // 24 hours
    },
    secret: process.env.SESSION_SECRET!,
    resave: false,
    saveUninitialized: false,
  })
);

If you use token-based auth (JWT in Authorization header, not cookies), CSRF is not a concern — cross-site requests cannot include the Authorization header due to CORS policy.

Clerk manages session cookies with appropriate security attributes by default — you do not need to configure this manually when using Clerk.

Quick Reference: Vulnerability → Mitigation → Detection

| OWASP Category | Node.js Risk | Mitigation | Tool Detection |
|---|---|---|---|
| A01: Broken Access Control | Missing route guards, IDOR | Guards on all controllers, ownership checks | Semgrep custom rules |
| A02: Cryptographic Failures | Weak hashing, JWT alg: none | bcrypt for passwords, specify JWT algorithm | Semgrep p/jwt, Snyk |
| A03: Injection — SQL | $queryRawUnsafe with user input | Use Prisma query builder or $queryRaw | Semgrep p/nodejs |
| A03: Injection — NoSQL | Object injection into MongoDB queries | Zod schema validation before query | Semgrep custom rules |
| A03: Injection — Command | exec() with user input | execFile() + input validation | Semgrep p/nodejs |
| A05: Misconfiguration | cors({ origin: '*' }), no security headers | Helmet, restrict CORS origins | Semgrep, SonarCloud |
| A06: Vulnerable Components | Transitive npm vulnerabilities | Snyk CI scan, --frozen-lockfile | Snyk |
| A07: Auth Failures | JWT algorithm confusion, localStorage token storage | Single algorithm, HttpOnly cookies | Semgrep p/jwt |
| A08: Data Integrity | Missing lockfile, deserialization without validation | npm ci / --frozen-lockfile, Zod parsing | Snyk monitor |
| A09: Logging Failures | Silent catch blocks, logging secrets | Structured logging, Sentry, no PII in logs | SonarCloud |
| A10: SSRF | fetch(userInput) | URL allowlist, private IP blocking | Semgrep custom rules |
| XSS (React) | dangerouslySetInnerHTML | DOMPurify.sanitize() | Semgrep p/react |
| XSS (Vue) | v-html with user content | DOMPurify.sanitize() in computed | Semgrep custom rules |
| CSRF | Cookie-based sessions | SameSite=Lax cookies | Manual review |

Hands-On Exercise

Conduct a security audit of one NestJS API module using the OWASP Top 10 as a checklist.

  1. Pick a resource module (e.g., orders, users, or any existing CRUD module).

  2. Check A01: Does every route handler have an auth guard? Is there an ownership check before returning records? Try calling an endpoint with a different user’s resource ID.

  3. Check A03: Find every place a database query is constructed. Are any using string concatenation or $queryRawUnsafe? Find every use of child_process — does any receive user input?

  4. Check A05: Run curl -I http://localhost:3000/api/health and inspect response headers. Are security headers present (X-Content-Type-Options, X-Frame-Options)? Install helmet if missing.

  5. Check A07: Find your JWT validation code. Does it specify an explicit algorithm? Is the exp claim validated? Are tokens stored in HttpOnly cookies or localStorage?

  6. Check A10: Search for fetch( and axios.get( calls where the URL is not a literal string. Is any URL value derived from request data?

  7. Run Semgrep with the security-focused rulesets and compare findings to your manual audit:

    semgrep --config=p/nodejs --config=p/react --config=p/owasp-top-ten src/
    

Further Reading

Secrets Management: From User Secrets to .env and Render

For .NET engineers who know: dotnet user-secrets, appsettings.json, IConfiguration, Azure Key Vault references, and environment-specific config transforms
You’ll learn: How the Node.js ecosystem handles secrets at each stage (local dev, CI, production), how to validate env vars at startup with Zod, and our team’s conventions for .env files and Render secret management
Time: 10-15 min read

The .NET Way (What You Already Know)

The .NET configuration system is layered. appsettings.json holds non-sensitive defaults. appsettings.Development.json holds dev overrides and is excluded from production deployments. dotnet user-secrets stores sensitive values outside the project directory (in ~/.microsoft/usersecrets/) so they never touch the filesystem that git tracks. In production, Azure Key Vault references in appsettings.json tell the app to read the actual value from Key Vault at runtime.

// Program.cs — configuration layer setup (mostly automatic)
var builder = WebApplication.CreateBuilder(args);
// Layer order (later layers override earlier ones):
// 1. appsettings.json
// 2. appsettings.{Environment}.json
// 3. User Secrets (Development only)
// 4. Environment variables
// 5. Command-line args

// User Secrets — stored in ~/.microsoft/usersecrets/{project-guid}/secrets.json
// Activated by: dotnet user-secrets set "ConnectionStrings:Default" "Server=..."
# .NET User Secrets workflow
dotnet user-secrets init
dotnet user-secrets set "ConnectionStrings:Default" "Server=localhost;Database=mydb;"
dotnet user-secrets set "Stripe:SecretKey" "sk_test_..."
dotnet user-secrets list

The key insight is that secrets never live in the repository. The repository contains the configuration schema (key names without values), and the actual values are stored separately per environment.

The Node.js Way

Node.js uses .env files for local development and environment variables for every other environment. The concept is identical to .NET; the tooling is different.

The dotenv package (or its Vite/Next.js built-in equivalent) reads a .env file and populates process.env at startup — the equivalent of IConfiguration with User Secrets as the source.

The .env File System

# .env — local development values
# This file NEVER gets committed to git
DATABASE_URL="postgresql://postgres:password@localhost:5432/myapp_dev"
STRIPE_SECRET_KEY="sk_test_51..."
CLERK_SECRET_KEY="sk_test_..."
JWT_SECRET="a-long-random-string-for-local-development-only"
SENTRY_DSN="https://abc123@sentry.io/456"
NODE_ENV="development"
PORT="3000"
ALLOWED_ORIGINS="http://localhost:5173,http://localhost:3000"

# .env.example — committed to git, documents all required variables
# Contains placeholder values or descriptions, never real secrets
DATABASE_URL="postgresql://user:password@localhost:5432/dbname"
STRIPE_SECRET_KEY="sk_test_..."  # Get from Stripe dashboard → Developers → API Keys
CLERK_SECRET_KEY="sk_test_..."   # Get from Clerk dashboard → API Keys
JWT_SECRET=""                     # Generate: openssl rand -base64 32
SENTRY_DSN=""                     # Get from Sentry project settings → Client Keys
NODE_ENV="development"
PORT="3000"
ALLOWED_ORIGINS="http://localhost:3000"

.env.example is the Node.js equivalent of appsettings.json as a schema document — it tells every developer exactly which variables the application needs, without containing any real values.
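Onboarding a new developer then reduces to copying the example and filling in real values (the secrets themselves come from your team's password manager or the service dashboards noted in the comments):

# Copy the schema, then edit .env and paste the real sk_/pk_ values
cp .env.example .env

# Generate a local-only value for JWT_SECRET, as the example file suggests
openssl rand -base64 32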

Setting Up dotenv

In NestJS, @nestjs/config wraps dotenv with a clean API:

pnpm add @nestjs/config
// src/app.module.ts
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,     // Available in every module without importing ConfigModule again
      envFilePath: '.env',
      // In production, environment variables come from the platform (Render, Vercel, etc.)
      // and the .env file does not exist — that is correct behavior
    }),
  ],
})
export class AppModule {}
// Anywhere in the app — use ConfigService instead of process.env directly
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';

@Injectable()
export class PaymentService {
  private readonly stripeKey: string;

  constructor(private readonly config: ConfigService) {
    this.stripeKey = this.config.getOrThrow<string>('STRIPE_SECRET_KEY');
    // getOrThrow() fails at construction time if the variable is missing;
    // this is the fail-fast equivalent of GetRequiredSection or validated IOptions in .NET
  }
}

For Vite-based frontends (Vue, React without Next.js), use the built-in env variable system:

# .env — frontend variables MUST be prefixed with VITE_ to be accessible in the browser
VITE_API_URL="http://localhost:3000/api"
VITE_CLERK_PUBLISHABLE_KEY="pk_test_..."
VITE_SENTRY_DSN="https://..."

# Non-prefixed variables are only available during the build process (build scripts)
# They are NOT embedded in the browser bundle
INTERNAL_BUILD_VARIABLE="value"  # Not accessible via import.meta.env in browser code
// Accessing frontend env variables
const apiUrl = import.meta.env.VITE_API_URL;   // Correct — Vite-specific syntax
const apiUrl = process.env.VITE_API_URL;        // Wrong — does not work in Vite

For Next.js:

# .env.local — the Next.js equivalent of .env for local development
DATABASE_URL="postgresql://..."
STRIPE_SECRET_KEY="sk_test_..."

# Variables prefixed with NEXT_PUBLIC_ are embedded in the browser bundle
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY="pk_test_..."
NEXT_PUBLIC_API_URL="http://localhost:3000"

Validating Environment Variables at Startup with Zod

This is the most valuable pattern in this article. In .NET, you typically discover missing configuration at runtime when the first request fails with a null reference. Zod validation at startup fails immediately with a clear error listing every missing variable.

// src/config/env.validation.ts
import { z } from 'zod';

const EnvSchema = z.object({
  // Required in all environments
  NODE_ENV: z.enum(['development', 'staging', 'production']),
  PORT: z.coerce.number().int().positive().default(3000),

  // Database
  DATABASE_URL: z.string().url(),

  // Authentication (Clerk)
  CLERK_SECRET_KEY: z.string().startsWith('sk_'),
  CLERK_PUBLISHABLE_KEY: z.string().startsWith('pk_'),

  // Payments
  STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
  STRIPE_WEBHOOK_SECRET: z.string().startsWith('whsec_').optional(),

  // Error tracking
  SENTRY_DSN: z.string().url().optional(),
  SENTRY_RELEASE: z.string().optional(),

  // CORS
  ALLOWED_ORIGINS: z
    .string()
    .transform((val) => val.split(',').map((s) => s.trim())),
});

// Infer TypeScript type from schema
export type Env = z.infer<typeof EnvSchema>;

// Validate at module load time — throws immediately if invalid
export function validateEnv(config: Record<string, unknown>): Env {
  const result = EnvSchema.safeParse(config);

  if (!result.success) {
    console.error('Invalid environment configuration:');
    console.error(result.error.format());
    process.exit(1);  // Hard fail — do not start the app with missing config
  }

  return result.data;
}
// src/app.module.ts — wire Zod validation into NestJS config
import { ConfigModule } from '@nestjs/config';
import { validateEnv } from './config/env.validation';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,
      validate: validateEnv,  // Called by ConfigModule on startup
    }),
  ],
})
export class AppModule {}

When a variable is missing, startup fails with:

Invalid environment configuration:
{
  DATABASE_URL: { _errors: ['Required'] },
  STRIPE_SECRET_KEY: { _errors: ['Required'] }
}

This is dramatically better than discovering a missing STRIPE_SECRET_KEY when the first payment request arrives in production.

The .gitignore Rule

# .gitignore
# NEVER commit .env files containing real values
.env
.env.local
.env.*.local
.env.development.local
.env.test.local
.env.production.local

# DO commit:
# .env.example — documents required variables without values
# .env.test — test-specific values that are safe to commit (non-secret, test credentials)

The correct files to commit:

| File | Commit? | Purpose |
|---|---|---|
| .env | Never | Local development secrets |
| .env.local | Never | Local developer overrides |
| .env.example | Always | Documents all required variables |
| .env.test | Usually | Non-secret test values (can commit if no real creds) |
| .env.production | Never | Do not create this file — use platform env vars |

Recovering from an Accidental .env Commit

If a .env file with real secrets is committed to git:

# IMMEDIATELY rotate every secret in the file — before anything else.
# The secret is compromised from the moment it was pushed.
# Rotation cannot wait.

# Remove from git history (but the secret is already compromised)
git filter-repo --path .env --invert-paths
# Or, for the simple case: git rm --cached .env && git commit -m "Remove accidentally committed .env"
# (note: this only stops tracking the file going forward; it remains in the repository history)

# Force push the rewritten history (one of the few legitimate uses of --force)
git push origin main --force

# Notify the team — anyone who cloned after the commit has the secret locally

The rotation is the critical step. Do not spend time removing the file from history before rotating the secrets — assume the secrets are already compromised the moment they hit GitHub.

Add a Semgrep rule or pre-commit hook to prevent this:

# Install pre-commit hooks with Husky
pnpm add -D husky lint-staged

# Add to package.json
{
  "scripts": {
    "prepare": "husky install"
  }
}

# .husky/pre-commit
#!/bin/sh
# Prevent committing .env files with real-looking secrets
if git diff --cached --name-only | grep -E '^\.env$|^\.env\.' | grep -v '\.example$'; then
  echo "ERROR: Attempting to commit an .env file."
  echo "If this is .env.example with no secrets, rename it and try again."
  exit 1
fi

Alternatively, configure git-secrets or Gitleaks as a pre-commit hook for broader secret pattern detection.
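
For example, a Gitleaks-based hook is a one-liner (a sketch; assumes gitleaks is installed locally, e.g. via brew install gitleaks):

#!/bin/sh
# .husky/pre-commit (alternative): scan staged changes for secret patterns
gitleaks protect --staged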

Render: Environment Variables in Production

Our staging and production deployments run on Render. Environment variables in Render are configured per-service in the Render dashboard under Service → Environment.

Render provides two variable types:

| Type | Behavior | Use For |
|---|---|---|
| Environment Variable | Stored in Render, injected at runtime | Non-secret config values |
| Secret File | Stored encrypted, mounted as a file | Rarely needed; prefer environment variables |

For sensitive values, Render’s environment variables are encrypted at rest and only visible to service administrators — this is the production equivalent of Azure Key Vault or User Secrets.

Our workflow:

  1. All variables in .env.example become environment variables in Render
  2. Values for staging come from staging service accounts (Stripe test keys, Clerk test instance, etc.)
  3. Values for production come from production service accounts
  4. Secret values are never put in render.yaml (Render’s infrastructure-as-code file) — render.yaml is committed to git
# render.yaml — infrastructure as code for Render
# This IS committed to git — never put secrets here
services:
  - type: web
    name: my-api
    runtime: node
    buildCommand: pnpm install --frozen-lockfile && pnpm build
    startCommand: node dist/main.js
    envVars:
      - key: NODE_ENV
        value: production     # Safe to commit — not a secret
      - key: PORT
        value: 3000
      - key: DATABASE_URL
        fromDatabase:
          name: my-postgres-db
          property: connectionString   # Render injects this from the managed DB
      # Secrets are added in the Render dashboard, not here:
      # STRIPE_SECRET_KEY — set in Render dashboard
      # CLERK_SECRET_KEY — set in Render dashboard
      # SENTRY_DSN — set in Render dashboard

Rotation Strategy

Secrets should be rotatable without downtime. The operational pattern:

  1. Generate the new secret value
  2. Add it to Render as a new environment variable (e.g., JWT_SECRET_NEW)
  3. Update the application to accept both old and new values during transition
  4. Deploy the application
  5. Remove the old variable from Render
  6. Deploy again to remove dual-acceptance logic

For most secrets (API keys, DSNs), rotation does not require dual-acceptance — the new key replaces the old one with a single deployment and a brief window where in-flight requests using the old key may fail. Plan rotations during low-traffic periods.

For JWT signing keys specifically, dual-acceptance matters — a user’s existing token was signed with the old key, and you cannot invalidate all sessions instantly. Implement JWKS (JSON Web Key Sets) with key versioning, or accept short-lived tokens so natural expiry handles the transition.
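
If you do manage your own signing secret, a minimal sketch of the dual-acceptance step (step 3 above), assuming a JWT_SECRET / JWT_SECRET_NEW pair and the jsonwebtoken package (both are illustrative; they are not part of our standard Clerk setup):

import * as jwt from 'jsonwebtoken';

// Try the new secret first, then fall back to the old one during the transition window
const secrets = [process.env.JWT_SECRET_NEW, process.env.JWT_SECRET].filter(
  (s): s is string => Boolean(s),
);

export function verifyToken(token: string): string | jwt.JwtPayload {
  let lastError: unknown = new Error('No JWT secret configured');
  for (const secret of secrets) {
    try {
      return jwt.verify(token, secret);
    } catch (err) {
      lastError = err; // invalid signature with this key; try the next one
    }
  }
  throw lastError;
}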

Clerk handles JWT key rotation automatically — this is one of the benefits of delegating authentication.

Key Differences

| Concern | .NET | Node.js / Our Stack | Notes |
|---|---|---|---|
| Local secrets store | dotnet user-secrets (outside project dir) | .env file (in project dir, gitignored) | Both keep secrets out of git |
| Secret format | JSON key hierarchy | Flat KEY=VALUE pairs | Node.js uses __ for nesting: DB__HOST |
| Config validation | Runtime (first use) | Startup (Zod validation) | Zod validation is explicitly better |
| Production secrets | Azure Key Vault | Render environment variables | Platform-managed in both cases |
| Config provider | IConfiguration | ConfigService (NestJS) / process.env | Similar API |
| Secret hierarchy | JSON hierarchy with : separator | Flat with __ for nesting | Configuration["Stripe:SecretKey"] → STRIPE__SECRET_KEY |
| Frontend secrets | N/A (server-only) | VITE_ prefix = public bundle, no prefix = build-only | Public keys only — no private keys in frontend bundles |
| Accidental commit recovery | git filter-branch + secret rotation | Same process — rotation is mandatory | Rotation always takes priority over history rewriting |

Gotchas for .NET Engineers

Gotcha 1: .env Files Are Not Hierarchical — Everything Is Flat

appsettings.json is hierarchical: { "Stripe": { "SecretKey": "...", "WebhookSecret": "..." } }. .env files are flat: STRIPE_SECRET_KEY=... and STRIPE_WEBHOOK_SECRET=....

When mapping .NET configuration keys to env variable names:

  • Replace : with __ (double underscore) for hierarchy: Stripe:SecretKeySTRIPE__SECRET_KEY
  • Conventionally, all env var names are UPPER_SNAKE_CASE

@nestjs/config gives you the nested access pattern back through a custom configuration factory: the factory maps the flat variables into a nested object, and ConfigService.get('stripe.secretKey') then resolves as expected.
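
A minimal sketch, assuming the Stripe variables from earlier in this article:

// src/config/configuration.ts — custom configuration factory (structure is illustrative)
export default () => ({
  stripe: {
    secretKey: process.env.STRIPE_SECRET_KEY,
    webhookSecret: process.env.STRIPE_WEBHOOK_SECRET,
  },
});

// src/app.module.ts (excerpt)
// ConfigModule.forRoot({ isGlobal: true, validate: validateEnv, load: [configuration] });
// Later, in a service: this.config.get<string>('stripe.secretKey');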

Gotcha 2: VITE_ and NEXT_PUBLIC_ Prefixes Embed Values in the Browser Bundle

Variables prefixed with VITE_ (Vite) or NEXT_PUBLIC_ (Next.js) are embedded in the compiled JavaScript bundle that ships to users’ browsers. Anyone can read them with browser developer tools or by decompiling the bundle.

This is intentional and correct for public keys (Clerk publishable key, Stripe publishable key, Sentry DSN, public API URL). It is catastrophically wrong for secret keys.

The rule: if a variable name starts with VITE_ or NEXT_PUBLIC_, it is public. Secret keys — Stripe secret key, Clerk secret key, database connection strings — must never have these prefixes. They belong on the server only.

# SAFE — publishable key is designed to be public
VITE_CLERK_PUBLISHABLE_KEY="pk_live_..."

# CATASTROPHIC — secret key exposed in browser bundle
VITE_CLERK_SECRET_KEY="sk_live_..."    # Never do this
VITE_DATABASE_URL="postgresql://..."   # Never do this

Gotcha 3: Missing Variables Are Silent Without Validation

Without Zod validation, process.env.STRIPE_SECRET_KEY returns undefined if the variable is not set. TypeScript types process.env as NodeJS.ProcessEnv, where every property is string | undefined. If you access it without checking:

// Compiles once you silence the type error (e.g. with a non-null assertion), but nothing checks the value
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, { apiVersion: '2024-04-10' });
// In CI, STRIPE_SECRET_KEY is unset — the Stripe constructor fails at runtime, not at build time

The Zod validation pattern at startup converts this from a runtime failure (discovered during a payment attempt) to a startup failure (discovered immediately when the service boots). Always validate env vars at startup.

Gotcha 4: Never Use process.env Directly in Application Code — Go Through ConfigService

Accessing process.env directly throughout your code makes it impossible to test (you cannot inject mock values) and makes it hard to find all places where a variable is used.

// WRONG — scattered process.env access
@Injectable()
export class StripeService {
  createPaymentIntent() {
    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);  // Hard to test
  }
}

// CORRECT — inject ConfigService, testable with a mock
@Injectable()
export class StripeService {
  private readonly stripe: Stripe;

  constructor(private readonly config: ConfigService) {
    this.stripe = new Stripe(this.config.getOrThrow('STRIPE_SECRET_KEY'));
  }
}

In tests, provide a ConfigService mock that returns test values, avoiding any need for .env files during testing.
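
A minimal sketch of that mock, assuming Vitest and the StripeService shown above:

// stripe.service.spec.ts
import { Test } from '@nestjs/testing';
import { ConfigService } from '@nestjs/config';
import { describe, expect, it } from 'vitest';
import { StripeService } from './stripe.service';

describe('StripeService', () => {
  it('builds its Stripe client from injected config', async () => {
    const moduleRef = await Test.createTestingModule({
      providers: [
        StripeService,
        {
          provide: ConfigService,
          // Only the method the service actually calls needs to exist on the mock
          useValue: { getOrThrow: () => 'sk_test_fake_key' },
        },
      ],
    }).compile();

    expect(moduleRef.get(StripeService)).toBeDefined();
  });
});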

Hands-On Exercise

Establish complete secrets management for your project.

  1. Create .env.example documenting every environment variable your application uses. Include a comment for each variable explaining where to get the value.

  2. If you do not have a .env file, create one from .env.example and fill in real local development values. Verify .env is in .gitignore. Run git status to confirm it is not tracked.

  3. Add Zod env validation. Create src/config/env.validation.ts using the pattern from this article. Wire it into ConfigModule.forRoot({ validate: validateEnv }). Remove a required variable from your .env and verify the application refuses to start with a clear error message. Restore the variable.

  4. Search your codebase for direct process.env access: grep -r "process\.env\." src/. For each occurrence, determine if it should go through ConfigService instead.

  5. Search for any hardcoded credentials or secrets: run semgrep --config=p/secrets . and review all findings.

  6. In Render (or your deployment platform), add all required environment variables from .env.example. Verify that the staging deployment reads them correctly by checking your Sentry/logging output on startup.

  7. Set up a pre-commit hook that prevents committing .env files: add the Husky configuration from this article and test it by attempting git add .env.

Quick Reference

| Task | Command / Config |
|---|---|
| Create .env from example | cp .env.example .env |
| Install NestJS config | pnpm add @nestjs/config |
| Install dotenv (standalone) | pnpm add dotenv |
| Load .env in NestJS | ConfigModule.forRoot({ isGlobal: true }) |
| Access config value | configService.getOrThrow<string>('KEY') |
| Validate env at startup | ConfigModule.forRoot({ validate: validateEnvFn }) |
| Find env usage in code | grep -r "process\.env\." src/ |
| Generate random secret | openssl rand -base64 32 |
| Check for secrets in code | semgrep --config=p/secrets . |

.env.example Template

# Application
NODE_ENV=development
PORT=3000

# Database (PostgreSQL)
DATABASE_URL=postgresql://user:password@localhost:5432/dbname

# Authentication (Clerk — https://dashboard.clerk.com)
CLERK_SECRET_KEY=sk_test_...
CLERK_PUBLISHABLE_KEY=pk_test_...

# Payments (Stripe — https://dashboard.stripe.com/test/apikeys)
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...

# Error tracking (Sentry — https://sentry.io/settings/your-org/projects/your-project/keys/)
SENTRY_DSN=
SENTRY_RELEASE=

# CORS — comma-separated list of allowed frontend origins
ALLOWED_ORIGINS=http://localhost:5173,http://localhost:3000

Zod Env Validation Starter

import { z } from 'zod';

const EnvSchema = z.object({
  NODE_ENV: z.enum(['development', 'staging', 'production']),
  PORT: z.coerce.number().default(3000),
  DATABASE_URL: z.string().url(),
  // Add your variables here
});

export type Env = z.infer<typeof EnvSchema>;

export function validateEnv(config: Record<string, unknown>): Env {
  const result = EnvSchema.safeParse(config);
  if (!result.success) {
    console.error('Invalid environment:', result.error.format());
    process.exit(1);
  }
  return result.data;
}

Env Variable Prefix Rules

| Prefix | Visible In | Use For |
|---|---|---|
| VITE_ | Browser bundle | Public API keys (Clerk publishable, Stripe publishable) |
| NEXT_PUBLIC_ | Browser bundle | Same as above, for Next.js |
| (no prefix) | Server only | Secret keys, database URLs, private API keys |

Further Reading

Claude Code: AI-Powered Development Workflow

For .NET engineers who know: Visual Studio IntelliSense, GitHub Copilot, and the general concept of AI-assisted code completion
You’ll learn: How Claude Code — an autonomous terminal-based AI agent — differs from completion-based tools, and how to integrate it into daily development work
Time: 15-20 min read

The .NET Way (What You Already Know)

GitHub Copilot inside Visual Studio or VS Code is the standard AI coding assistant in the .NET ecosystem. The model is familiar: you write code, Copilot suggests the next line or block, you accept or reject. IntelliSense fills in member names and signatures. ReSharper or the Roslyn analyzer gives you quick-fixes. Each of these is a reactive tool — it responds to your cursor position and suggests local completions.

When you need something bigger — generating a whole service class, understanding an unfamiliar codebase, writing a migration script — you probably switch to a chat interface (GitHub Copilot Chat, ChatGPT, or similar), copy-paste relevant code in, get a response, and copy-paste it back out. The workflow is manual. The AI has no direct access to your project. You are the bridge between the AI and the code.

This is the model Claude Code replaces.

The Claude Code Way

Claude Code is a terminal-based AI agent. You run it with claude from any directory, and it operates with direct access to your file system, your shell, and your project. It can read any file, write any file, run any command, and chain those operations across multiple steps — all within a single conversation.

The distinction from Copilot is architectural: Copilot is a completion engine. Claude Code is an agent. The difference matters in practice:

| Copilot (completion) | Claude Code (agent) |
|---|---|
| Suggests the next line | Reads your whole project, then writes the feature |
| Operates on one file at a time | Can refactor across 20 files simultaneously |
| You explain context manually | It reads the codebase and figures out context itself |
| Accepts/rejects suggestions | You review diffs of completed work |
| No awareness of tests or CI | Can run tests, fix failures, iterate |
| No awareness of Git state | Can commit, create PRs, interpret diffs |

Installation and Setup

# Install Claude Code globally via npm
npm install -g @anthropic-ai/claude-code

# Verify the installation
claude --version

# Start Claude Code in your project directory
cd /path/to/your/project
claude

After installation, Claude Code prompts you to authenticate with your Anthropic account the first time you run it. Once authenticated, your credentials are stored locally and you do not need to re-authenticate for every session.

# Start Claude Code (opens an interactive session)
claude

# Start with an initial instruction
claude "explain how authentication works in this codebase"

# Run a one-shot task without entering interactive mode
claude -p "add JSDoc comments to all exported functions in src/services/"

The claude command opens an interactive REPL in your terminal. You type instructions, Claude Code reads files, writes code, runs commands, and reports results. You review the work and give further instructions.

The Conversation Model

Claude Code maintains a context window across a session. As you work, the context accumulates: files Claude Code has read, commands it has run, code it has written. This is different from a stateless chat interface — Claude Code remembers what it did five exchanges ago in the same session.

Practical implications:

  • Start a new session (/clear or exit and restart) when switching to a completely different task. Accumulated context from a database migration task will confuse a UI component task.
  • For large codebases, be explicit about scope. “Look at the auth module” is better than “look at everything.”
  • Claude Code can request to read files, but it reads what you allow. On first use in a codebase, it will scan for a CLAUDE.md file (covered below) and use it as persistent context.

Common Workflows

Code Generation

The most common use case is generating new code that fits your existing patterns:

> Add a POST /api/invoices/:id/send endpoint. It should mark the invoice as sent,
  queue an email notification using our existing queue pattern (look at how
  /api/orders/:id/confirm works), and return the updated invoice.

Claude Code will:

  1. Read the existing confirm endpoint and surrounding code
  2. Understand the queue pattern from context
  3. Write the new endpoint, service method, and any necessary types
  4. Show you the diff before writing anything (configurable)

The key to good code generation is specificity. “Add a send endpoint” produces generic code. “Add a send endpoint following the confirm pattern” produces code that fits.

Debugging

Claude Code can investigate bugs across the full call stack without you copying stack traces into a chat:

> The POST /api/payments/webhook is returning 500 intermittently.
  The Sentry error id is PAYMENT-4521. Can you look at the handler,
  the relevant service code, and our Stripe webhook documentation
  link in the README, and tell me what's likely wrong?

Claude Code reads the handler, traverses to the service, checks the README, and provides an analysis. It can then propose a fix and apply it.

Refactoring

Large-scale refactors that would be tedious to do manually are where Claude Code earns its keep:

> We need to rename UserDto to UserResponseDto across the entire codebase.
  Find every usage, update the types, the imports, the controller return types,
  and the test assertions. Do not change the database schema or the API response
  shape — only the internal TypeScript name.

For a refactor touching 30 files, this is the difference between 20 minutes of careful search-and-replace and a few seconds of delegation.

Code Review

Before opening a PR, use Claude Code as a first-pass reviewer:

> Review the changes I've made in this branch for correctness, edge cases,
  and adherence to our patterns. Pay particular attention to the error
  handling and any place where I'm making assumptions about input shape.

Claude Code reads the diff (git diff main...HEAD), cross-references against the existing codebase, and produces a structured review. It catches things that automated linters miss: incorrect business logic, missing null checks in paths that reach the database, inconsistent error handling between similar endpoints.

Learning a New Codebase

When you join an existing project or start working in an unfamiliar area:

> Walk me through how a new user signs up. Start from the frontend form
  submission and trace through to the database insert. Explain each layer
  and what it's responsible for.

This is substantially faster than reading the code yourself. Claude Code traces the execution path, explains each component, and can answer follow-up questions. Think of it as having an onboarding session with the original author.

Slash Commands

Claude Code has built-in slash commands for common operations:

/help          # List all available commands
/clear         # Clear conversation context (start fresh)
/commit        # Stage and commit changes with a generated commit message
/review        # Review the current diff (equivalent to the review workflow above)
/cost          # Show token usage and estimated cost for the current session
/exit          # Exit Claude Code

The /commit command is worth calling out. When you’ve made a set of changes and want to commit:

> /commit

Claude Code will:

  1. Run git diff --staged (and git diff if nothing is staged)
  2. Generate a commit message that accurately describes the changes
  3. Show you the message for approval
  4. Stage the relevant files and commit

The generated messages are significantly better than generic “fix stuff” commits. They follow conventional commit format and describe the why, not just the what.
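
For example, a generated message might look like this (illustrative only):

feat(payments): retry webhook processing on transient Stripe errors

Intermittent 500s from the webhook handler were dropping events. Add
exponential backoff so transient Stripe failures are retried instead of
surfacing as failed payments.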

CLAUDE.md — Persistent Project Instructions

CLAUDE.md is a Markdown file you commit to the root of your repository. Claude Code reads it automatically at the start of every session. Think of it as the README.md that Claude Code reads instead of humans — it sets the context, conventions, and constraints for AI-assisted work on that project.

This is the equivalent of the “Project Structure” and “Development Guidelines” sections of a good README, but written specifically so an AI agent knows how to behave in your codebase.

Our team’s CLAUDE.md template:

# Project: [Project Name]

## Stack
- Runtime: Node.js 22, TypeScript 5.x
- Framework: [NestJS / Next.js / Nuxt]
- Database: PostgreSQL via Prisma
- Package manager: pnpm

## Architecture Overview
[2-3 sentences describing the main modules and their relationships]

## Key Conventions
- Services throw domain exceptions (`NotFoundException`, `ConflictException`)
  — controllers do not catch them
- All database access goes through Prisma service, never raw SQL
- DTOs use `class-validator` decorators for request validation
- Feature modules follow the pattern in `src/orders/` — use it as reference
- Environment variables are typed in `src/config/config.service.ts`
- Do not use `any` — use `unknown` and narrow, or define the type

## Testing
- Unit tests: Vitest, colocated with source files (`*.test.ts`)
- E2E tests: Playwright in `tests/e2e/`
- Run tests: `pnpm test`

## Branch and PR Conventions
- Branch naming: `feat/TICKET-123-short-description`, `fix/TICKET-123-description`
- All PRs require a passing CI run before merge
- Squash merge to main

## Commands
- `pnpm dev` — start development server
- `pnpm build` — production build
- `pnpm test` — run unit tests
- `pnpm lint` — ESLint check
- `pnpm db:migrate` — run pending Prisma migrations
- `pnpm db:studio` — open Prisma Studio

## Do Not
- Do not modify `prisma/schema.prisma` without also generating a migration
- Do not commit `.env` files
- Do not use `console.log` — use the Logger service
- Do not hardcode URLs — use the config service

When Claude Code reads this file, it knows how your project is structured before you give it any instructions. You spend less time explaining context and more time giving tasks.

Best Practices

Use Claude Code for tasks, not completions. If you need the next line of code, IntelliSense or Copilot is faster. Claude Code is for tasks that require reading context, making decisions across files, and producing a complete result.

Be specific about scope and constraints. “Refactor the auth system” is a bad instruction. “Extract the JWT validation logic from auth.service.ts into a separate jwt.service.ts, update the module registration, and leave the business logic unchanged” is a good instruction.

Review everything. Claude Code is genuinely good at understanding intent and pattern-matching across a codebase. It is not infallible. Review the diff it produces before accepting it, the same way you would review a PR from a junior engineer. The speed gains come from delegation, not from eliminating review.

Keep sessions focused. A session that wanders across multiple unrelated tasks accumulates context that becomes noise. A fresh session per logical task produces better results.

Write CLAUDE.md before you need it. Teams that skip the CLAUDE.md spend the first few instructions of every session re-explaining how the project works. Write it once; it pays dividends across every session from that point forward.

Know when not to use it. Simple edits, well-understood code, learning exercises — doing these yourself is faster and builds competence. Use Claude Code for tasks where the bottleneck is time, not understanding.

Key Differences

| Concern | GitHub Copilot | Claude Code |
|---|---|---|
| Access model | In-editor, cursor-based | Terminal agent, full file system access |
| Context | Current file | Entire codebase (what it reads) |
| Task scope | Line/block completion | Multi-file, multi-step tasks |
| Interaction | Accept/reject suggestion | Review completed work |
| Git awareness | None | Reads diffs, can commit |
| CI awareness | None | Can run tests, interpret failures |
| Persistent config | None | CLAUDE.md per repository |
| Best for | Boilerplate, autocomplete | Refactoring, feature generation, debugging |

Gotchas for .NET Engineers

Gotcha 1: Claude Code Writes What You Describe, Not What You Mean

In Visual Studio, IntelliSense knows the exact type signatures in your project. When Copilot suggests code, it is constrained by what the compiler will accept. Claude Code operates on intent — it generates code based on what you describe and what it reads in context. If your description is ambiguous or your CLAUDE.md is missing key conventions, the output will be plausible but wrong in ways that compile cleanly.

Example: if you ask Claude Code to “add caching to the products endpoint” without specifying your caching infrastructure, it may implement an in-memory cache using a plain Map, when your team uses Redis via a shared CacheService. The fix is specificity: “add caching using our CacheService following the pattern in src/orders/orders.service.ts.”

This is not a limitation to work around — it is the correct mental model. Claude Code is a fast, capable colleague who needs clear requirements, not a compiler that enforces correctness.

Gotcha 2: The Context Window Is Not Unlimited

Claude Code can read a large amount of code in a session, but there is a limit. On very large codebases with many files read and many code changes made, the context approaches capacity. Signs that you are near the limit: Claude Code starts asking about things it already addressed earlier in the session, or its suggestions stop reflecting earlier constraints you established.

The fix is to start fresh sessions for distinct tasks and keep CLAUDE.md concise. The CLAUDE.md is always loaded at session start — it is your most reliable persistent context. Do not rely on information from earlier in a long session for complex, late-session tasks.

Gotcha 3: Claude Code Has No Access to External Systems Unless You Give It Access

Claude Code can read files and run terminal commands. It cannot access your Azure DevOps board, your Jira tickets, your Slack messages, or your database unless you provide credentials and tools for those systems. When you say “look at the ticket for context,” Claude Code cannot do that — you need to paste the relevant requirements into the conversation.

In practice, this means: paste the acceptance criteria, error messages, and relevant context directly into the session. Do not assume Claude Code can fetch it from somewhere. This is a deliberate security model — Claude Code operates in your shell environment with your credentials, and it explicitly does not reach out to services unless you invoke the relevant commands.

Gotcha 4: Generated Code Reflects the Context It Was Given

If the code Claude Code reads as reference examples has problems — inconsistent error handling, missing types, poor patterns — the generated code will reflect those problems. “Generate a new endpoint following this pattern” is only as good as the pattern. Claude Code amplifies whatever quality level it sees in the existing code.

This is actually useful as a diagnostic: if Claude Code consistently generates poor code for a particular part of your codebase, it is often a signal that the existing code in that area is not a good example to pattern-match against.

Gotcha 5: /commit Stages Files; Review Before Confirming

The /commit command will suggest staged files and a commit message. Confirm that the right files are included before accepting. It is easy to accidentally include debug files, temporary test scripts, or generated files that should be in .gitignore. Claude Code commits what is in your working tree, not necessarily what you intended to commit.

Establish a habit: after /commit proposes a commit, run git diff --staged to verify the contents before confirming.

Hands-On Exercise

Set up Claude Code on the project you are currently working on and complete the following tasks:

Task 1: Create your CLAUDE.md

Run claude in your project root and ask:

> Read the project structure and the main source files, then generate a CLAUDE.md
  file that documents the stack, key conventions, important commands, and things
  to avoid. Focus on what a developer would need to know to contribute effectively.

Review the output, correct any inaccuracies (Claude Code may guess at some conventions), and commit it.

Task 2: Use Claude Code for a real task

Pick a small, self-contained feature or bug fix you were planning to do anyway. Write it as an instruction in Claude Code instead of writing the code yourself. Review the output thoroughly.

Questions to answer after:

  • Did Claude Code follow your existing patterns?
  • What did you have to correct?
  • What would have improved the initial instruction?

Task 3: Code review workflow

Make a small set of changes to your project — either the changes from Task 2 or a new set. Then:

> /review

Read the review. Did it catch anything you missed? Did it flag false positives? Use this to calibrate how you weigh Claude Code’s review feedback going forward.

Quick Reference

Session Commands

| Command | What It Does |
|---|---|
| claude | Start interactive session in current directory |
| claude "instruction" | Start session with initial instruction |
| claude -p "instruction" | Run one-shot (non-interactive) |
| /clear | Clear conversation context |
| /commit | Stage and commit changes |
| /review | Review current diff |
| /cost | Show token usage for session |
| /exit | Exit Claude Code |

Instruction Patterns That Work

| Goal | Pattern |
|---|---|
| Follow existing patterns | “…following the pattern in src/orders/” |
| Constrain scope | “Only modify files in src/payments/” |
| Preserve behavior | “Change the implementation, not the public interface” |
| Understand first | “Read X and Y, explain the flow, then ask me before making changes” |
| Safe iteration | “Show me the diff before writing anything” |

CLAUDE.md Sections (Team Standard)

# Project: [Name]
## Stack
## Architecture Overview
## Key Conventions
## Testing
## Branch and PR Conventions
## Commands
## Do Not

When to Use Claude Code vs. Code Manually

| Use Claude Code | Code Manually |
|---|---|
| Cross-file refactoring | Single-line edits |
| Generating code from established patterns | Learning new concepts |
| Debugging across multiple layers | Simple, well-understood fixes |
| Code review before PR | Exploratory/experimental code |
| Understanding unfamiliar codebases | Performance-critical, subtle logic |

Further Reading

CLI-First Workflow: Visual Studio to the Terminal

For .NET engineers who know: Visual Studio 2022, Solution Explorer, the Build menu, NuGet Package Manager, Team Explorer, SQL Server Management Studio
You’ll learn: The terminal-based equivalents of every Visual Studio workflow our team uses, and how to become fluent in a CLI-first development environment
Time: 15-20 min read

The .NET Way (What You Already Know)

Visual Studio is one of the most capable IDEs ever built. The Build menu compiles and runs. Solution Explorer shows the project tree. NuGet Package Manager resolves dependencies. Team Explorer (or the newer Git integration) handles branches and PRs. The integrated debugger sets breakpoints with a click. The test runner runs and visualizes tests. SSMS or the built-in data tools query the database. Everything is point-and-click, keyboard-shortcut-driven, and integrated.

This is a legitimately good development experience. The tradeoff is that it is opaque (you rarely know exactly what MSBuild is doing), hard to script (you cannot easily automate a sequence of IDE actions), and tied to a specific machine configuration (the workspace settings, extensions, and window layouts live on your machine).

Our stack is different: the primary interface is the terminal, and VS Code is a code editor that sits alongside it, not above it. This article is about making that transition comfortable.

The JS/TS Stack Way

Why CLI-First

The shift to terminal-first is not aesthetic preference. There are practical reasons:

Reproducibility. A command you type in the terminal can be put in a Makefile, a GitHub Actions workflow, a package.json script, or a team wiki. A sequence of clicks in a GUI cannot.

Speed. For common tasks — installing packages, running tests, checking git status, switching branches — the terminal is faster once muscle memory is established.

Portability. Every team member, every CI server, and every Docker container has a shell. Not everyone has Visual Studio.

Scriptability. Common sequences become one-line aliases. Multi-step setups become shell scripts. Repetitive tasks become automated.

The mental model shift: in Visual Studio, the GUI is the primary interface and the terminal is optional. In our workflow, the terminal is the primary interface and the GUI (VS Code) is the editor.

Essential CLI Tools and Their VS Equivalents

pnpm — The Package Manager (NuGet + dotnet CLI)

# Install all dependencies (like: right-click solution → Restore NuGet Packages)
pnpm install

# Add a package (like: NuGet Package Manager → Install)
pnpm add zod

# Add a dev dependency (like: NuGet Package Manager → Install, but marked PrivateAssets="all")
pnpm add -D vitest

# Remove a package
pnpm remove lodash

# Update a specific package
pnpm update zod

# Run a script defined in package.json (like: Build menu → Build/Run/Test)
pnpm dev         # Start development server
pnpm build       # Production build
pnpm test        # Run tests
pnpm lint        # ESLint check

# Add a global CLI tool (like: dotnet tool install -g)
pnpm add -g @anthropic-ai/claude-code

# List installed packages
pnpm list

# Check for security vulnerabilities (like: Dependabot or OWASP check)
pnpm audit

gh — GitHub CLI (Team Explorer + Azure DevOps)

The gh CLI is the terminal interface to GitHub. For .NET engineers used to Team Explorer or Azure DevOps, this is the equivalent for every workflow that involves PRs, issues, and repository management.

# Install (macOS)
brew install gh

# Authenticate
gh auth login

# Create a PR for the current branch (like: Create Pull Request in Team Explorer)
gh pr create --title "feat: add invoice sending" --body "Closes #123"

# List open PRs
gh pr list

# View a specific PR
gh pr view 42

# Check out a PR locally (useful for reviewing)
gh pr checkout 42

# View PR status (CI checks, review status)
gh pr status

# Merge a PR (squash — our default)
gh pr merge 42 --squash

# List issues
gh issue list

# Create an issue
gh issue create --title "Bug: webhook 500 on retry" --body "..."

# View repository in browser
gh browse

# View CI run status (like: Azure Pipelines build status)
gh run list
gh run view [run-id]
gh run watch  # Live watch current CI run

gh removes the need to switch between terminal and browser for most PR and CI workflows. Use gh pr status before a standup to see what is waiting for your review.

git — Version Control (Team Explorer + git)

.NET engineers who use Team Explorer may not have deep git CLI fluency. These are the commands used daily:

# Status, staging, committing
git status
git add src/payments/payments.service.ts src/payments/payments.module.ts
git commit -m "feat: add webhook retry logic"

# Branching
git checkout -b feat/PROJ-123-add-webhook-retry
git checkout main
git branch --list
git branch -d feat/PROJ-123-add-webhook-retry  # Delete merged branch

# Syncing
git fetch origin
git pull origin main
git push origin feat/PROJ-123-add-webhook-retry

# Inspection
git log --oneline -20          # Last 20 commits, condensed
git diff                       # Unstaged changes
git diff --staged              # Staged changes (what will be committed)
git diff main...HEAD           # All changes since branching from main

# Stashing (like: shelving in TFS)
git stash
git stash pop
git stash list

psql — PostgreSQL Client (SQL Server Management Studio)

Our database is PostgreSQL. psql is the terminal client. It is less graphical than SSMS but more scriptable.

# Connect to a database
psql postgresql://user:password@localhost:5432/mydb

# Or using environment variable (set in .env)
psql $DATABASE_URL

# Inside psql:
\l           -- List databases (like: Object Explorer → Databases)
\c mydb      -- Switch to database
\dt          -- List tables (like: Object Explorer → Tables)
\d users     -- Describe table schema (like: right-click → Design)
\i file.sql  -- Execute a SQL file
\q           -- Quit

# Run a one-liner query without entering the REPL
psql $DATABASE_URL -c "SELECT count(*) FROM users;"

In practice, most database interaction goes through Prisma Studio for data browsing and Prisma migrations for schema changes. psql is for debugging, running ad-hoc queries, and scripted operations.

# Prisma equivalents for the most common SSMS tasks
pnpm prisma studio          # Open Prisma Studio (web UI, like SSMS table viewer)
pnpm prisma migrate dev     # Apply pending migrations (like: Update-Database)
pnpm prisma migrate status  # Check migration status (like: __EFMigrationsHistory)
pnpm prisma db pull         # Reverse-engineer schema from DB (like: Scaffold-DbContext)

docker — Containers (Visual Studio Container Tools)

Docker is used locally to run PostgreSQL and other services without installing them directly.

# Start the local development database
docker compose up -d

# Stop it
docker compose down

# View running containers
docker ps

# View logs from a specific container
docker logs my-postgres

# Open a shell inside a container
docker exec -it my-postgres bash

# Remove all stopped containers and unused images (periodic cleanup)
docker system prune

Our projects include a docker-compose.yml that defines the local development stack. docker compose up -d replaces “install and configure SQL Server Express” as the database setup step for new team members.

# docker-compose.yml — typical project setup
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: devpassword
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

Shell Productivity

Aliases

Shell aliases reduce multi-word commands to single letters. Add these to your ~/.zshrc or ~/.bashrc:

# Git aliases
alias gs="git status"
alias ga="git add"
alias gc="git commit"
alias gp="git push"
alias gl="git log --oneline -20"
alias gd="git diff"
alias gds="git diff --staged"
alias gco="git checkout"
alias gcb="git checkout -b"

# pnpm aliases
alias pd="pnpm dev"
alias pb="pnpm build"
alias pt="pnpm test"
alias pl="pnpm lint"
alias pi="pnpm install"
alias pa="pnpm add"

# Navigation
alias ..="cd .."
alias ...="cd ../.."
alias ll="ls -la"

# Project-specific (customize per project)
alias dbstudio="pnpm prisma studio"
alias dbmigrate="pnpm prisma migrate dev"

After editing .zshrc, reload it:

source ~/.zshrc

Shell Functions

For multi-step operations, functions are more useful than aliases:

# Start a new feature branch with the right naming convention
# Usage: newfeat PROJ-123 add-webhook-retry
newfeat() {
  git checkout main
  git pull origin main
  git checkout -b "feat/$1-$2"
  echo "Created branch: feat/$1-$2"
}

# Create a PR from the current branch
# Usage: pr "Add webhook retry logic"
pr() {
  local branch=$(git rev-parse --abbrev-ref HEAD)
  gh pr create --title "$1" --body "" --draft
  echo "Draft PR created for branch: $branch"
}

# Clean up merged local branches
cleanup-branches() {
  git fetch --prune
  git branch --merged main | grep -v "^\* \|  main$" | xargs -r git branch -d
  echo "Cleaned up merged branches."
}

Add these to your ~/.zshrc after the aliases.

History Search

Zsh and Bash both support searching command history. The most efficient approach:

# In zsh: Ctrl+R opens an interactive history search
# Type any part of a previous command, press Ctrl+R to cycle through matches

# Or install fzf for fuzzy history search (highly recommended)
brew install fzf
# Follow the install instructions to enable fzf key bindings:
$(brew --prefix)/opt/fzf/install

# After fzf install: Ctrl+R opens a fuzzy-searchable history picker

The fzf integration for history search is one of the highest-leverage quality-of-life improvements available. Once installed, recalling a previous prisma migrate dev command from history takes only a few keystrokes.

Tab Completion

Zsh has excellent tab completion. For it to work well with tools like git, gh, and pnpm:

# Install Oh My Zsh (optional but useful — sets up completions automatically)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

# Or manually: enable zsh completions in ~/.zshrc
autoload -Uz compinit
compinit

# pnpm tab completion (add to ~/.zshrc)
[[ -f ~/.config/tabtab/__tabtab.zsh ]] && . ~/.config/tabtab/__tabtab.zsh

Tab completion for git shows branch names. Tab completion for gh shows subcommands. Tab completion for pnpm shows scripts from package.json. These alone eliminate a significant amount of typing.

Terminal Multiplexing with tmux

When working on a feature, you typically need multiple terminal panes open simultaneously: one for the dev server, one for running commands, one for logs. Instead of multiple terminal windows, use tmux (terminal multiplexer).

# Install
brew install tmux

# Start a new named session
tmux new -s myproject

# Key bindings (Ctrl+B is the prefix key):
Ctrl+B, c         # Create new window (like a new tab)
Ctrl+B, ,         # Rename current window
Ctrl+B, n         # Next window
Ctrl+B, p         # Previous window
Ctrl+B, %         # Split pane vertically
Ctrl+B, "         # Split pane horizontally
Ctrl+B, arrow     # Move between panes
Ctrl+B, z         # Zoom current pane (full screen toggle)
Ctrl+B, d         # Detach from session (leaves it running)

# Reattach to a detached session
tmux attach -t myproject

# List active sessions
tmux ls

A typical tmux layout for our stack:

+----------------------------------+
| pnpm dev (NestJS dev server)     |
+------------------+---------------+
| git / gh / pnpm  | docker logs   |
+------------------+---------------+

You leave pnpm dev running in the top pane all day, and use the bottom panes for everything else. When you detach (Ctrl+B, d) and reattach later, the dev server is still running exactly where you left it.

VS Code Terminal Integration

VS Code has an integrated terminal (Ctrl+`). For most work, the integrated terminal and the external terminal are interchangeable. The integrated terminal has one advantage: it opens at the workspace root automatically.

Useful VS Code terminal features for our workflow:

# Split the terminal (like tmux but in VS Code)
# Terminal → Split Terminal (or Ctrl+Shift+5 on Mac)

# Run a task from package.json without typing in the terminal
# Terminal → Run Task → select from package.json scripts

# Open a new terminal and run a command
# Ctrl+Shift+P → "Create New Integrated Terminal"

One common point of confusion: VS Code’s terminal inherits the shell environment from when VS Code was launched, not from your current .zshrc state. If you add an alias to .zshrc and it does not appear in VS Code’s terminal, restart VS Code or open a fresh terminal tab.

Recommended Shell Configuration

Here is a minimal but effective ~/.zshrc for our stack:

# ~/.zshrc

# ---- Node Version Manager ----
# Use fnm (fast node manager) instead of nvm — same concept, much faster
# Install: curl -fsSL https://fnm.vercel.app/install | bash
eval "$(fnm env --use-on-cd)"

# ---- pnpm ----
export PNPM_HOME="$HOME/Library/pnpm"
export PATH="$PNPM_HOME:$PATH"

# ---- Aliases ----
alias gs="git status"
alias ga="git add"
alias gp="git push"
alias gl="git log --oneline -20"
alias gd="git diff"
alias gds="git diff --staged"
alias gco="git checkout"
alias gcb="git checkout -b"
alias pd="pnpm dev"
alias pt="pnpm test"
alias pl="pnpm lint"
alias ll="ls -la"

# ---- Functions ----
newfeat() {
  git checkout main && git pull origin main && git checkout -b "feat/$1-$2"
}

# ---- History ----
HISTSIZE=10000
SAVEHIST=10000
setopt SHARE_HISTORY
setopt HIST_IGNORE_DUPS

# ---- Completions ----
autoload -Uz compinit
compinit

# ---- fzf (fuzzy history search) ----
[ -f ~/.fzf.zsh ] && source ~/.fzf.zsh

Key Differences

| Task | Visual Studio | Our CLI Workflow |
|---|---|---|
| Build project | Build menu → Build Solution | pnpm build |
| Run with hot reload | Debug → Start Debugging (or F5) | pnpm dev |
| Run tests | Test Explorer → Run All | pnpm test |
| Install a package | NuGet Package Manager GUI | pnpm add [package] |
| Create a branch | Team Explorer → Branches | git checkout -b feat/... |
| Create a PR | Team Explorer → Pull Requests | gh pr create |
| View PR status | Azure DevOps browser | gh pr status |
| Query the database | SSMS GUI | psql $DATABASE_URL or Prisma Studio |
| Run a migration | PM Console: Update-Database | pnpm prisma migrate dev |
| Start a container | Docker Desktop GUI | docker compose up -d |
| View container logs | Docker Desktop GUI | docker logs [name] |
| Run multiple terminals | Multiple VS windows | tmux panes |

Gotchas for .NET Engineers

Gotcha 1: There Is No Build Button — Every Step Is Explicit

In Visual Studio, pressing F5 implicitly compiles, resolves references, starts the app, and attaches the debugger. Our stack does not have an equivalent one-button experience. pnpm dev starts the development server with hot reload — but if dependencies are not installed, you get an error. If a migration is missing, the app starts but database calls fail. If environment variables are not set, the app crashes at startup.

The discipline required: when starting work on a project, follow a checklist before running pnpm dev:

pnpm install                    # Ensure dependencies are current
cp .env.example .env            # Ensure .env exists (first time only)
docker compose up -d            # Ensure the database is running
pnpm prisma migrate dev         # Ensure migrations are applied (reads DATABASE_URL from .env)
pnpm dev                        # Now start the server

This checklist is what Visual Studio’s F5 used to do invisibly. Knowing the steps explicitly is actually better — it makes the system understandable and reproducible.
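
To keep the checklist reproducible, you can capture the one-time steps as a package.json script (a sketch; the script name "bootstrap" is arbitrary):

{
  "scripts": {
    "bootstrap": "pnpm install && docker compose up -d && pnpm prisma migrate dev"
  }
}

New team members then run pnpm bootstrap once, followed by pnpm dev.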

Gotcha 2: Shell State Does Not Persist Between Sessions

Environment variables set with export VAR=value in a terminal session are gone when you close the terminal. Configuration that should persist goes in ~/.zshrc or .env files. This catches .NET engineers who are used to system-level environment variables (set via System Properties on Windows) persisting indefinitely.

The pattern:

  • Project-specific variables go in .env (committed as .env.example, never committed with real values)
  • Tool configuration goes in ~/.zshrc (aliases, PATH modifications)
  • Secret values go in .env files or your shell profile, never in package.json scripts

If a command works in one terminal but not another, check whether you sourced ~/.zshrc after a recent change.

Gotcha 3: Command Availability Depends on PATH

On Windows with Visual Studio, tools are available globally after installation because the installer modifies the system PATH. On macOS/Linux, a tool installed via pnpm add -g or brew install is only available in terminals where the relevant directories are in your PATH. If pnpm is not in your PATH, pnpm: command not found. If gh is not in your PATH after a brew install, you need to restart your terminal or source your shell config.

The diagnostic: which gh, which pnpm, which node. If the command returns a path, the tool is available. If it returns nothing, it is either not installed or not in PATH.

Gotcha 4: Git CLI Requires Explicit Staging

Visual Studio’s Git integration shows modified files and commits them in a GUI. The git CLI requires explicitly staging files with git add before committing. This trips up engineers who run git commit -m "message" and are told "no changes added to commit" — because the files are modified but not staged.

The workflow:

git status          # See what is modified (equivalent to Team Explorer's Changes view)
git diff            # See what changed (equivalent to the diff view)
git add src/file.ts # Stage a specific file
git add -p          # Interactively stage hunks (very useful for partial commits)
git commit -m "..."

git add -p (patch mode) lets you review and selectively stage parts of a file — useful when you have multiple logical changes in one file and want to commit them separately. There is no equivalent in most GUI tools.

Gotcha 5: pnpm Scripts Are Not Global — They Run in Context

When you run pnpm test in a monorepo root, it runs the test script defined in the root package.json. When you run it inside a package directory, it runs that package’s test script. The behavior changes depending on where your terminal’s working directory is. This is different from MSBuild, which builds the entire solution regardless of where you invoke it.

If pnpm test seems to do nothing, or runs the wrong tests, check pwd to confirm you are in the right directory.

Hands-On Exercise

Complete this setup on your machine. Every step is a CLI command — do not use any GUIs.

Step 1: Install the tools

# Package manager
brew install pnpm

# GitHub CLI
brew install gh

# Node version manager (fnm)
curl -fsSL https://fnm.vercel.app/install | bash

# tmux
brew install tmux

# fzf
brew install fzf
$(brew --prefix)/opt/fzf/install

# Authenticate with GitHub
gh auth login

Step 2: Configure your shell

Add the aliases, functions, and tool initializations from the “Recommended Shell Configuration” section above to your ~/.zshrc. Reload it:

source ~/.zshrc

Verify aliases work:

gs        # Should run git status
ll        # Should run ls -la

Step 3: Clone a project and start it with CLI only

# Clone a project you work on
gh repo clone [org/repo]
cd [repo]

# Install dependencies
pnpm install

# Start the database
docker compose up -d

# Apply migrations
pnpm prisma migrate dev

# Start the dev server
pnpm dev

Step 4: Create a tmux session

tmux new -s dev

# In the new session, split into panes:
# Top pane: pnpm dev (already running)
# Bottom left: for git and gh commands
# Bottom right: for docker logs

Practice navigating between panes with Ctrl+B, arrow and detaching/reattaching with Ctrl+B, d and tmux attach -t dev.

Step 5: Practice the PR workflow entirely in the terminal

git checkout -b feat/cli-practice
# Make a trivial change (add a comment to any file)
git add [file]
git commit -m "chore: cli workflow practice"
git push origin feat/cli-practice
gh pr create --title "CLI practice PR" --body "Practice only, do not merge" --draft
gh pr view --web  # Open in browser to verify it worked
gh pr close $(gh pr list --json number --jq '.[0].number')  # Close it
git checkout main
git branch -d feat/cli-practice

Quick Reference

First-Day CLI Cheat Sheet

| Task | Command |
|---|---|
| Install dependencies | pnpm install |
| Start dev server | pnpm dev |
| Run tests | pnpm test |
| Run tests in watch mode | pnpm test --watch |
| Lint code | pnpm lint |
| Build for production | pnpm build |
| Start database | docker compose up -d |
| Apply migrations | pnpm prisma migrate dev |
| Open Prisma Studio | pnpm prisma studio |
| Git status | git status |
| Create feature branch | git checkout -b feat/PROJ-123-description |
| Stage and commit | git add [files] && git commit -m "message" |
| Push branch | git push origin [branch-name] |
| Create PR | gh pr create |
| Check PR status | gh pr status |
| View CI runs | gh run list |
| Watch current CI run | gh run watch |
| Connect to database | psql $DATABASE_URL |
| Container logs | docker logs [container-name] |

Essential Tool Summary

| .NET / VS Tool | CLI Equivalent | Install |
|---|---|---|
| NuGet Package Manager | pnpm add [pkg] | brew install pnpm |
| dotnet CLI | pnpm [script] | (with pnpm) |
| Team Explorer / Git | git + gh | brew install gh |
| SQL Server Management Studio | psql / Prisma Studio | brew install libpq |
| Docker Desktop | docker CLI | brew install --cask docker |
| Azure DevOps browser | gh pr / gh run | (with gh) |
| Multiple VS windows | tmux | brew install tmux |
| History search | fzf (Ctrl+R) | brew install fzf |

Further Reading

VS Code Configuration for TypeScript Development

For .NET engineers who know: Visual Studio 2022 — the IDE with integrated compiler, debugger, NuGet, test runner, designer, and everything else
You’ll learn: The mental model shift from “full IDE” to “editor with extensions,” and how to configure VS Code to match Visual Studio’s productivity for TypeScript development
Time: 10-15 min read

The .NET Way (What You Already Know)

Visual Studio is a full IDE. It has a built-in C# compiler (Roslyn), a code formatter, a debugger with process attach, a test runner with visual results, a NuGet UI, a project designer, and built-in Git integration. When you install Visual Studio, you get a coherent, co-designed set of tools maintained by Microsoft. The behavior of IntelliSense, the debugger, and the test runner are part of the product.

Extensions in Visual Studio exist but are secondary. The core experience works out of the box.

VS Code is not this. VS Code is a text editor with a powerful extension API. Without extensions, it is a fast, cross-platform text editor with good syntax highlighting and multi-cursor editing. With the right extensions, it becomes a productive TypeScript development environment. TypeScript language support and a Node.js debugger ship with VS Code, but ESLint integration, Prettier formatting, framework tooling, and the test runner are extensions that must be installed and configured.

This distinction matters for three reasons: the configuration is your responsibility, it should be committed to the repository so the team shares it, and the experience is composable rather than fixed.

The VS Code Way

Essential Extensions

Install these extensions. The extension IDs are the exact identifiers to use with the Extensions panel or code --install-extension.

Core TypeScript Development

| Extension | ID | .NET Equivalent |
|---|---|---|
| ESLint | dbaeumer.vscode-eslint | Roslyn analyzer red squiggles |
| Prettier - Code Formatter | esbenp.prettier-vscode | Visual Studio format-on-save |
| TypeScript (built-in) | (built-in) | Roslyn IntelliSense |
| Error Lens | usernamehw.errorlens | Inline error display in Visual Studio |
| Path IntelliSense | christian-kohler.path-intellisense | File path autocomplete in string literals |

Git and Code Review

| Extension | ID | .NET Equivalent |
|---|---|---|
| GitLens | eamodio.gitlens | Team Explorer git history + line-level blame |
| GitHub Pull Requests | github.vscode-pull-request-github | Team Explorer PR integration |

AI Assistance

| Extension | ID | .NET Equivalent |
|---|---|---|
| GitHub Copilot | github.copilot | GitHub Copilot for Visual Studio |
| GitHub Copilot Chat | github.copilot-chat | Copilot Chat panel |

Framework-Specific

| Extension | ID | When to Install |
|---|---|---|
| ES7+ React/Redux Snippets | dsznajder.es7-react-js-snippets | React projects |
| Vue - Official | vue.volar | Vue 3 projects |
| Tailwind CSS IntelliSense | bradlc.vscode-tailwindcss | Projects using Tailwind |
| Prisma | prisma.prisma | Any project using Prisma |

Quality of Life

| Extension | ID | What It Does |
|---|---|---|
| Auto Rename Tag | formulahendry.auto-rename-tag | Renames matching HTML/JSX tag |
| Bracket Pair Colorizer | (built-in since VS Code 1.60) | Color-matched brackets |
| Todo Tree | Gruntfuzzly.todo-tree | Lists TODO comments across project |

Installing Extensions from the CLI

# Install all team-standard extensions at once
code --install-extension dbaeumer.vscode-eslint
code --install-extension esbenp.prettier-vscode
code --install-extension usernamehw.errorlens
code --install-extension eamodio.gitlens
code --install-extension github.copilot
code --install-extension github.copilot-chat
code --install-extension github.vscode-pull-request-github
code --install-extension christian-kohler.path-intellisense
code --install-extension prisma.prisma
code --install-extension dsznajder.es7-react-js-snippets

Or open the Extensions panel (Ctrl+Shift+X / Cmd+Shift+X), search by name, and install.

Workspace Settings — .vscode/settings.json

VS Code settings exist at three levels: user (global), workspace (.vscode/settings.json), and folder (multi-root). Workspace settings are committed to the repository and override user settings for that project. This is how the team shares configuration.

Commit the following .vscode/settings.json to every TypeScript project:

{
  // ---- Formatting ----
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.tabSize": 2,
  "editor.insertSpaces": true,
  "editor.rulers": [100],
  "editor.wordWrap": "off",

  // ---- TypeScript-specific formatting ----
  "[typescript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[typescriptreact]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[javascript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },

  // ---- JSON formatting ----
  "[json]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[jsonc]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },

  // ---- ESLint ----
  "eslint.validate": [
    "javascript",
    "javascriptreact",
    "typescript",
    "typescriptreact"
  ],
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": "explicit"
  },

  // ---- TypeScript ----
  "typescript.preferences.importModuleSpecifier": "relative",
  "typescript.updateImportsOnFileMove.enabled": "always",
  "typescript.suggest.autoImports": true,
  "typescript.inlayHints.parameterNames.enabled": "literals",
  "typescript.inlayHints.variableTypes.enabled": false,

  // ---- Explorer ----
  "files.exclude": {
    "**/node_modules": true,
    "**/.git": true,
    "**/dist": true,
    "**/.next": true
  },
  "search.exclude": {
    "**/node_modules": true,
    "**/dist": true,
    "**/.next": true,
    "**/pnpm-lock.yaml": true
  },

  // ---- Editor Experience ----
  "editor.minimap.enabled": false,
  "editor.bracketPairColorization.enabled": true,
  "editor.guides.bracketPairs": "active",
  "editor.linkedEditing": true,
  "editor.suggest.preview": true
}

What each setting does, mapped to .NET equivalents:

| Setting | VS Code Effect | Visual Studio Equivalent |
|---|---|---|
| editor.formatOnSave | Runs Prettier on every save | Format document on save (Tools → Options) |
| editor.codeActionsOnSave with ESLint | Runs ESLint auto-fixes on every save | Quick Fix on save (not a VS feature, but similar) |
| typescript.updateImportsOnFileMove | Updates imports when you rename/move a file | Visual Studio does this for C# automatically |
| editor.linkedEditing | Renames the matching HTML/JSX tag when you rename one | Not a Visual Studio feature |
| search.exclude | Hides node_modules from search results | Solution Explorer doesn’t show external packages |

Extension Recommendations — .vscode/extensions.json

The extensions.json file prompts every developer who opens the repository to install the recommended extensions. This eliminates “it works on my machine” problems caused by missing extensions.

{
  "recommendations": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "usernamehw.errorlens",
    "eamodio.gitlens",
    "github.copilot",
    "github.copilot-chat",
    "christian-kohler.path-intellisense",
    "prisma.prisma"
  ],
  "unwantedRecommendations": [
    "hookyqr.beautify",
    "ms-vscode.vscode-typescript-tslint-plugin"
  ]
}

unwantedRecommendations suppresses the prompt for extensions that conflict with your setup. beautify conflicts with Prettier. The old tslint plugin is superseded by @typescript-eslint.

When a developer opens the project, VS Code shows a notification: “Do you want to install the recommended extensions?” One click installs everything in the recommendations array.

Debugging Configuration — .vscode/launch.json

The debugger in VS Code is configured via launch.json. This is the equivalent of Visual Studio’s debug profiles (the dropdown next to the Run button).

For a NestJS API:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug NestJS",
      "type": "node",
      "request": "launch",
      "program": "${workspaceFolder}/src/main.ts",
      "runtimeArgs": ["--nolazy", "-r", "ts-node/register"],
      "sourceMaps": true,
      "cwd": "${workspaceFolder}",
      "envFile": "${workspaceFolder}/.env",
      "console": "integratedTerminal",
      "restart": true
    },
    {
      "name": "Attach to Running Process",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "sourceMaps": true,
      "restart": true
    }
  ]
}

For a Next.js application:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug Next.js (Server)",
      "type": "node",
      "request": "launch",
      "program": "${workspaceFolder}/node_modules/.bin/next",
      "args": ["dev"],
      "cwd": "${workspaceFolder}",
      "envFile": "${workspaceFolder}/.env.local",
      "sourceMaps": true,
      "console": "integratedTerminal"
    },
    {
      "name": "Debug Next.js (Client — Chrome)",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:3000",
      "webRoot": "${workspaceFolder}",
      "sourceMaps": true
    }
  ]
}

To debug:

  1. Set a breakpoint by clicking in the left gutter (same as Visual Studio)
  2. Press F5 or click the Run and Debug panel (Ctrl+Shift+D)
  3. Select the configuration and press the play button

The experience is close to Visual Studio but requires this upfront configuration. Once launch.json is committed, every team member gets the same debug configurations automatically.

Attaching to a running process works similarly to Visual Studio’s “Attach to Process” (Ctrl+Alt+P):

# Start the dev server with the inspector enabled
node --inspect src/main.js

# Or for pnpm scripts:
NODE_OPTIONS='--inspect' pnpm dev

Then use the “Attach to Running Process” configuration in launch.json.

Task Configuration — .vscode/tasks.json

Tasks in VS Code are runnable actions bound to keyboard shortcuts or the Command Palette. They are equivalent to Visual Studio’s external tools or custom build events.

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "pnpm: dev",
      "type": "shell",
      "command": "pnpm dev",
      "group": "build",
      "isBackground": true,
      "problemMatcher": [],
      "presentation": {
        "reveal": "always",
        "panel": "dedicated"
      }
    },
    {
      "label": "pnpm: test",
      "type": "shell",
      "command": "pnpm test",
      "group": {
        "kind": "test",
        "isDefault": true
      },
      "presentation": {
        "reveal": "always",
        "panel": "shared"
      }
    },
    {
      "label": "pnpm: lint",
      "type": "shell",
      "command": "pnpm lint",
      "group": "build",
      "presentation": {
        "reveal": "always",
        "panel": "shared"
      }
    },
    {
      "label": "Prisma: migrate dev",
      "type": "shell",
      "command": "pnpm prisma migrate dev",
      "group": "none",
      "presentation": {
        "reveal": "always",
        "panel": "shared"
      }
    },
    {
      "label": "Prisma: studio",
      "type": "shell",
      "command": "pnpm prisma studio",
      "group": "none",
      "isBackground": true,
      "presentation": {
        "reveal": "always",
        "panel": "dedicated"
      }
    }
  ]
}

Run tasks via:

  • Ctrl+Shift+P → “Tasks: Run Task” → select the task
  • Ctrl+Shift+B runs the default build task
  • The default test task has no preset shortcut, but you can bind one yourself; see the keybinding sketch below
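A minimal user-level keybinding for the default test task (the key choice is an assumption; workbench.action.tasks.test is VS Code's built-in "Tasks: Run Test Task" command):

// keybindings.json (user-level)
[
  {
    "key": "ctrl+shift+t",
    "command": "workbench.action.tasks.test"
  }
]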

Multi-Root Workspaces (Monorepos)

If your project is a monorepo with a frontend and backend in separate directories, VS Code supports multi-root workspaces — a single VS Code window with multiple root folders, each with its own settings.

Create a .code-workspace file at the monorepo root:

{
  "folders": [
    {
      "name": "API (NestJS)",
      "path": "./apps/api"
    },
    {
      "name": "Web (Next.js)",
      "path": "./apps/web"
    },
    {
      "name": "Shared Packages",
      "path": "./packages"
    }
  ],
  "settings": {
    "editor.formatOnSave": true,
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "extensions": {
    "recommendations": [
      "dbaeumer.vscode-eslint",
      "esbenp.prettier-vscode",
      "prisma.prisma"
    ]
  }
}

Open with: code my-monorepo.code-workspace

Each folder in the workspace can have its own .vscode/settings.json for folder-specific settings (e.g., the API folder might have NestJS-specific snippets enabled, the web folder might have React snippets). The workspace-level settings apply to all folders.

This is the equivalent of opening a Visual Studio Solution that contains multiple projects.
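For example, a folder-level override might look like this (a hypothetical sketch; the path and values are illustrative):

// apps/api/.vscode/settings.json (applies only to files inside the API folder)
{
  "editor.rulers": [120],
  "typescript.tsdk": "node_modules/typescript/lib"
}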

Keyboard Shortcuts

VS Code keyboard shortcuts will feel familiar if you have used Visual Studio, but some differ. The most important for TypeScript development:

| Action | VS Code (Mac) | VS Code (Win/Linux) | Visual Studio Equivalent |
|---|---|---|---|
| Open file by name | Cmd+P | Ctrl+P | Ctrl+, (Go to All) |
| Go to symbol in file | Cmd+Shift+O | Ctrl+Shift+O | Ctrl+F12 |
| Go to symbol in project | Cmd+T | Ctrl+T | Solution-wide search |
| Go to definition | F12 | F12 | F12 |
| Peek definition | Alt+F12 | Alt+F12 | Alt+F12 |
| Find all references | Shift+F12 | Shift+F12 | Shift+F12 |
| Rename symbol | F2 | F2 | F2 |
| Open Command Palette | Cmd+Shift+P | Ctrl+Shift+P | Ctrl+Q (Quick Launch) |
| Toggle terminal | Ctrl+` | Ctrl+` | Alt+F2 (terminal pane) |
| Format document | Shift+Alt+F | Shift+Alt+F | Ctrl+K, Ctrl+D |
| Open Extensions | Cmd+Shift+X | Ctrl+Shift+X | Extensions manager |
| Split editor | Cmd+\ | Ctrl+\ | Drag tab to split |
| Multi-cursor | Alt+Click | Alt+Click | Alt+Click |
| Select all occurrences | Cmd+Shift+L | Ctrl+Shift+L | Ctrl+Shift+H (replace) |

Most valuable shortcut for TypeScript development: Cmd+P / Ctrl+P (Go to File). Type any part of a filename and navigate there instantly. This replaces browsing Solution Explorer.

Settings Sync

VS Code’s built-in Settings Sync (Cmd+Shift+P → “Settings Sync: Turn On”) syncs your personal settings, extensions, and keybindings across machines via your GitHub or Microsoft account. This covers your global user settings — the team workspace settings come from the repository.

Configure what to sync:

Settings Sync → Settings → What to Sync:
✓ Settings
✓ Keybindings
✓ Extensions
✓ UI State
✗ Profiles (can cause conflicts in team settings)

Team settings go in .vscode/settings.json (committed). Personal preferences go in user settings (synced). Workspace settings always win over user settings for that project.

Key Differences

| Concern | Visual Studio | VS Code |
|---|---|---|
| TypeScript/C# support | Built-in, always present | TypeScript language server (built-in), must configure |
| Formatter | IDE-level setting | Prettier extension + .prettierrc |
| Linter | Roslyn analyzers (built-in) | ESLint extension + eslint.config.mjs |
| Debugger | Visual Studio debugger (full featured) | Built-in Node.js debugger (powerful but requires launch.json config) |
| Test runner | Test Explorer (visual, built-in) | Vitest extension or terminal |
| NuGet / packages | GUI + PM console | Terminal only (pnpm add) |
| Team settings sharing | .editorconfig (partial) | .vscode/settings.json (comprehensive) |
| Extension model | IDE plugins (heavier) | Extensions (lighter, more composable) |
| Startup speed | Slow (full IDE) | Fast (text editor core) |
| Memory usage | High (200MB-2GB typical) | Lower (100-500MB typical) |

Gotchas for .NET Engineers

Gotcha 1: Format on Save Requires the Right Default Formatter

After installing Prettier, editor.formatOnSave: true alone is not sufficient. VS Code must know which extension to use as the formatter. Without "editor.defaultFormatter": "esbenp.prettier-vscode" (or the per-language equivalent), VS Code may use the built-in TypeScript formatter instead of Prettier.

Symptoms: formatting does not match your .prettierrc config, or VS Code asks “Select a formatter” every time you save.

The fix is in .vscode/settings.json:

{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "[typescript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  }
}

The language-specific override ([typescript]) takes precedence over the global one. Set both to be safe.
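For reference, a minimal .prettierrc that the settings above would pick up (the specific values are illustrative team preferences, not requirements):

{
  "semi": true,
  "singleQuote": true,
  "printWidth": 100,
  "trailingComma": "all"
}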

Gotcha 2: ESLint Type-Checked Rules Require tsconfig.json in the Right Place

Some @typescript-eslint rules (including no-floating-promises) require the TypeScript compiler to provide type information. The ESLint extension needs to find your tsconfig.json. If your project has an unusual structure (tsconfig not at the workspace root, or a monorepo with multiple tsconfigs), the ESLint extension may silently skip type-checked rules.

Diagnosis: open the ESLint output panel (View → Output → ESLint) and look for errors referencing tsconfig.json or parserOptions.project.

Fix for monorepos — in each sub-package’s eslint.config.mjs:

// eslint.config.mjs is an ES module, so derive the directory instead of using __dirname
import { fileURLToPath } from 'node:url';
import path from 'node:path';

export default [
  {
    languageOptions: {
      parserOptions: {
        project: './tsconfig.json', // Explicit relative path
        tsconfigRootDir: path.dirname(fileURLToPath(import.meta.url)),
      },
    },
  },
];

Gotcha 3: VS Code’s TypeScript Version vs. Your Project’s TypeScript Version

VS Code bundles its own version of TypeScript for the language server. Your project installs TypeScript as a dev dependency. These versions can differ — VS Code’s bundled version might be older or newer than your project’s.

This causes subtle issues: a feature available in TypeScript 5.5 might not show IntelliSense hints if VS Code’s bundled TypeScript is 5.3.

Fix: tell VS Code to use your project’s TypeScript version:

Cmd+Shift+P → "TypeScript: Select TypeScript Version"
→ "Use Workspace Version"

Or set it in .vscode/settings.json:

{
  "typescript.tsdk": "node_modules/typescript/lib"
}

This should be in every project’s .vscode/settings.json. It ensures every developer uses the same TypeScript version as the CI build.

Gotcha 4: Extensions Are Not Enabled in All Workspaces by Default

Some extensions ask whether they should be enabled globally or per-workspace when installed. Others are enabled globally by default. For security-sensitive extensions (like anything that reads your files), you may want per-workspace control.

If an extension is installed but not working in a specific project, check Cmd+Shift+P → “Extensions: Show Installed Extensions” and verify the extension is enabled for the current workspace, not just globally.

For the ESLint extension specifically: it must have a valid eslint.config.mjs (or legacy .eslintrc) in the project to show any errors. If the config file is missing, the extension is silently inactive.

Hands-On Exercise

Set up VS Code from scratch for a TypeScript project using only committed configuration files.

Step 1: Install the extensions

Run the code --install-extension commands from the Essential Extensions section above. After installing, verify each extension appears in the Extensions panel.

Step 2: Create the workspace configuration files

In a TypeScript project you are working on, create:

.vscode/
├── settings.json      (from the template above)
├── extensions.json    (from the template above)
├── launch.json        (from the appropriate template for NestJS or Next.js)
└── tasks.json         (from the template above)

Commit these four files. Note that .vscode/ is typically not in .gitignore — it should be committed for team standardization.
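If your existing .gitignore excludes .vscode/ wholesale, one common pattern (an assumption, not a requirement of this setup) is to un-ignore just the shared files:

# .gitignore
.vscode/*
!.vscode/settings.json
!.vscode/extensions.json
!.vscode/launch.json
!.vscode/tasks.json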

Step 3: Verify the configuration

Open a TypeScript file and:

  1. Introduce a deliberate formatting error (wrong indentation). Save the file. Prettier should fix it.
  2. Introduce a deliberate ESLint violation (e.g., an unused variable). Verify the red underline appears inline (Error Lens).
  3. Set a breakpoint in a function and press F5. Verify the debugger stops at the breakpoint.

Step 4: Test the task runner

Ctrl+Shift+P → “Tasks: Run Task” → run pnpm: test. Verify the test output appears in the terminal panel.

Step 5: Share with the team

In the project’s README, add a one-line note that .vscode/ is committed and what it provides. Paste the code --install-extension commands into your CONTRIBUTING.md or project README so new team members can get set up in one step.

Quick Reference

Files to Commit to Every TypeScript Project

| File | Purpose |
|---|---|
| .vscode/settings.json | Formatting, ESLint, TypeScript preferences |
| .vscode/extensions.json | Extension recommendations |
| .vscode/launch.json | Debug configurations |
| .vscode/tasks.json | Runnable tasks (build, test, migrate) |
| .prettierrc | Prettier formatting rules |
| eslint.config.mjs | ESLint rule configuration |
| tsconfig.json | TypeScript compiler options |

Essential Settings Quick Reference

{
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.codeActionsOnSave": { "source.fixAll.eslint": "explicit" },
  "typescript.tsdk": "node_modules/typescript/lib",
  "typescript.updateImportsOnFileMove.enabled": "always"
}

Extension ID Reference

| Extension | ID |
|---|---|
| ESLint | dbaeumer.vscode-eslint |
| Prettier | esbenp.prettier-vscode |
| Error Lens | usernamehw.errorlens |
| GitLens | eamodio.gitlens |
| GitHub Copilot | github.copilot |
| GitHub Copilot Chat | github.copilot-chat |
| GitHub Pull Requests | github.vscode-pull-request-github |
| Path IntelliSense | christian-kohler.path-intellisense |
| Prisma | prisma.prisma |
| Vue - Official | vue.volar |
| Tailwind CSS IntelliSense | bradlc.vscode-tailwindcss |
| ES7+ React Snippets | dsznajder.es7-react-js-snippets |

Further Reading

Code Review Practices: PR Culture in GitHub

For .NET engineers who know: Azure DevOps pull requests — branch policies, required reviewers, work item linking, build validation
You’ll learn: How GitHub’s PR culture differs from Azure DevOps, what a strong PR description and review look like in our team, and the mechanics of GitHub’s review interface
Time: 10-15 min read

The .NET Way (What You Already Know)

Azure DevOps pull requests are well-integrated with the broader DevOps ecosystem. Work item linking connects PRs to user stories and bugs. Branch policies enforce reviewer requirements, build validation, and merge strategies. The review UI supports threaded comments and a diff view.

The PR process in many .NET shops is driven by policy: PRs are created because the branch policy requires them, reviewed because the required-reviewer policy mandates it, and merged when all checks pass. The culture tends toward “process compliance” — the PR exists to satisfy the gate.

GitHub’s PR culture grew differently. PRs originated in open-source projects where authors needed to persuade maintainers that a contribution was worth including. The description needed to explain the problem being solved and why this solution was correct. The reviewer needed to understand the change, not just approve it. This culture transferred into GitHub-hosted private repositories, including ours.

The same mechanics exist in both systems. The culture and expectations around them differ.

The GitHub Way

What Makes a Good PR Description

A good PR description does three things:

  1. Explains what problem is being solved (and links to the issue or ticket)
  2. Describes the approach taken and any alternatives considered
  3. Tells the reviewer what to focus on

This matters because reviewers are context-switching from their own work. A description that explains the “why” means the reviewer spends less time reconstructing your reasoning and more time finding actual problems.

Our PR template — save this as .github/PULL_REQUEST_TEMPLATE.md in your repository:

## What and Why

<!-- What does this PR do, and why is it needed?
     Link the issue or ticket: Closes #123 -->

## Approach

<!-- How did you solve it? Why this approach over alternatives?
     Skip if the implementation is straightforward. -->

## Testing

<!-- How was this tested? What should reviewers verify? -->
- [ ] Unit tests added/updated
- [ ] Tested locally with dev server
- [ ] Edge cases considered: <!-- list them -->

## Review Notes

<!-- What should reviewers focus on?
     Flag any areas where you are uncertain. -->

## Screenshots (if UI changes)

<!-- Before / after for any visible changes -->

The Closes #123 syntax automatically closes the linked GitHub issue when the PR is merged. Use Fixes #123 for bugs and Closes #123 for features.

Good PR description example:

## What and Why

Adds invoice PDF generation when an invoice is marked as sent.
Closes #247.

Previously, clicking "Send Invoice" sent the email but attached no document.
Customers were calling in to request PDF copies.

## Approach

Uses Puppeteer to render the invoice HTML template to PDF server-side.
Considered PDFKit (programmatic generation) but the invoice template already
exists as a Handlebars HTML template — rendering it was significantly less
work than recreating the layout with PDFKit primitives.

PDF generation is synchronous in this implementation (acceptable for invoice
sizes we expect). Left a TODO for async/queue approach if generation time
becomes a problem at scale.

## Testing

- Unit tests: `invoice-pdf.service.test.ts` covers the template rendering
  and page size options.
- Manual: Sent a test invoice from the staging environment and verified
  the PDF attachment opened correctly in Gmail and Outlook.
- Edge case: invoices with 0 line items generate a valid (mostly empty) PDF.

## Review Notes

The Puppeteer setup in `src/lib/browser.ts` is the most likely area to need
adjustment — specifically the `executablePath` and launch args, which differ
between local macOS and the Linux CI environment. Flagging in case anyone
has run into this pattern before.

Poor PR description example:

## What and Why

PDF generation.

## Testing

Tested locally.

The second version tells the reviewer nothing. They will spend 10 minutes reading code before they can form an opinion.

PR Size Guidelines

Small, focused PRs are a discipline, not a convenience. They are easier to review, easier to revert, and easier to reason about in git history.

| PR Size | Lines Changed | Typical Scope |
|---|---|---|
| Small | < 200 lines | Single feature, single bug fix |
| Medium | 200-500 lines | Feature with tests, multi-layer change |
| Large | 500+ lines | Consider splitting |
| Avoid | 1000+ lines | Almost always splittable |

When a PR gets large, the review quality drops. Reviewers spend more time navigating the diff and less time thinking about correctness. The effective approval rate approaches rubber-stamping.

How to keep PRs small:

  • Separate refactoring from feature changes. If you need to rename a service before adding a feature, do it in a separate PR (one commit, easy to review, zero risk).
  • Separate migrations from the feature that uses them. The migration PR has no logic — it is easy to review. The feature PR does not include schema noise.
  • If a PR touches more than three feature areas, ask whether it should be multiple PRs.

The one exception: PRs that are large because they add a lot of tests are fine. Test-heavy PRs are low-risk even when large.

The Review Interface

GitHub’s review UI has several features that Azure DevOps also has, but the interaction patterns differ.

Leaving a comment:

Click any line in the diff to add a single-line comment, or click and drag to select multiple lines. GitHub groups comments into a “review” — you can leave multiple comments and then submit them all at once as “Approve,” “Request changes,” or “Comment.”

This is different from Azure DevOps, where each comment is submitted independently. In GitHub, draft all your comments first, then submit the review with a verdict. This gives the author a single notification instead of one per comment.
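Because we work CLI-first, the same review actions are also available through the gh CLI; a sketch (the PR number 123 is illustrative):

# Check out the PR branch locally to run and inspect it
gh pr checkout 123

# Submit a review with a verdict
gh pr review 123 --comment --body "A few questions inline."
gh pr review 123 --approve
gh pr review 123 --request-changes --body "The retry logic needs a test."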

Suggesting changes:

The most powerful review feature: click the suggestion icon (looks like a document with a plus) when leaving a comment to propose an exact replacement. The author can accept the suggestion with one click, and GitHub commits it on their behalf.

<!-- In a review comment, click "Add a suggestion" -->
Suggested change:

```suggestion
const result = await this.ordersService.findOne(id);
if (!result) {
  throw new NotFoundException(`Order #${id} not found`);
}
return result;
```

Suggestions are ideal for small, unambiguous fixes — typos, naming inconsistencies, missing null checks. Use prose comments for anything that requires judgment or discussion.

Resolving threads:

When the author addresses a comment (either by making the change or by explaining why they did not), they click “Resolve conversation.” This collapses the thread and makes it easy to track which feedback has been addressed. As a reviewer, do not resolve threads yourself unless the author asked you to — it should be the author’s action to indicate “I’ve handled this.”

Review states:

  • Comment — feedback only, no verdict on merge readiness
  • Approve — you are satisfied; the PR is ready to merge (pending other required reviewers and CI)
  • Request changes — the PR has problems that must be addressed before merge; you will need to re-review and approve afterward

Use “Request changes” sparingly. It is appropriate for correctness problems, security issues, and architectural concerns. For style suggestions or minor improvements, “Comment” is appropriate — you are not blocking the merge, just offering ideas.

CODEOWNERS

The .github/CODEOWNERS file maps file paths to GitHub users or teams who are automatically added as required reviewers when those files change.

# .github/CODEOWNERS

# Default owners for everything not matched below
*                        @your-org/senior-engineers

# Database migrations require a senior engineer
prisma/migrations/       @your-org/database-owners

# CI/CD configuration changes require DevOps team approval
.github/workflows/       @your-org/devops

# Auth-related code requires security review
src/auth/                @your-org/security-reviewers
src/middleware/auth*.ts  @your-org/security-reviewers

# Frontend package changes require frontend lead
apps/web/                @your-org/frontend-lead

CODEOWNERS is enforced via branch protection rules. When a PR changes a file matching a pattern, the associated owner is added as a required reviewer — the PR cannot merge until they approve.

This is the equivalent of Azure DevOps's "Required reviewers by file path" branch policy.

Branch Protection Rules

Configure branch protection for main in GitHub Settings → Branches → Branch protection rules:

Branch name pattern: main

✓ Require a pull request before merging
  ✓ Require approvals: 1 (or 2 for larger teams)
  ✓ Dismiss stale pull request approvals when new commits are pushed
  ✓ Require review from Code Owners

✓ Require status checks to pass before merging
  Required checks:
    - ci / lint
    - ci / type-check
    - ci / test
  ✓ Require branches to be up to date before merging

✓ Require conversation resolution before merging

✗ Allow force pushes (leave disabled)
✗ Allow deletions (leave disabled)

The "Require branches to be up to date" rule requires contributors to merge main into their branch before merging. This prevents the scenario where two PRs individually pass CI but conflict when both merge to main.

Squash vs. Merge Commits

GitHub offers three merge strategies: merge commit, squash merge, and rebase merge. We use squash merge.

| Strategy | Result in main’s history |
|---|---|
| Merge commit | All individual commits from the branch preserved, plus a merge commit |
| Squash merge | All commits from the branch collapsed into one commit on main |
| Rebase merge | All commits replayed onto main (no merge commit, all individual commits kept) |

Why squash merge:

  • main’s git history stays clean — one commit per PR, each with the PR title and number
  • Individual “WIP”, “fix typo”, “address review feedback” commits don’t pollute the log
  • git log --oneline on main reads like a changelog
  • git bisect and git blame are more useful on a clean history

# What main's history looks like with squash merges:
# a1b2c3d feat: add invoice PDF generation (#247)
# d4e5f6g fix: correct webhook retry backoff logic (#231)
# g7h8i9j feat: add customer notification preferences (#219)

# Vs. with merge commits:
# a1b2c3d Merge pull request #247 from feat/invoice-pdf
# b2c3d4e Address PR feedback - fix undefined case
# c3d4e5f WIP: pdf generation working locally
# d4e5f6g Merge pull request #231 from fix/webhook-retry
# ...

Configure squash merge as the only option in GitHub Settings → General → Pull Requests:

✓ Allow squash merging — Default commit message: "Pull request title and description"
✗ Allow merge commits
✗ Allow rebase merging

Handling Review Feedback

When you receive “Request changes” feedback:

  1. Read all comments before making any changes. Some comments may be related, and understanding the full picture avoids making changes that conflict.
  2. Address each comment — either make the change, or reply with a clear explanation of why you did not.
  3. Mark each addressed thread as resolved (click “Resolve conversation”).
  4. Push the updated commits. GitHub will notify the reviewers.
  5. Do not re-request review until all threads are resolved — reviewers should not have to re-check whether you addressed earlier feedback.

When you are the reviewer and the author pushes changes in response to your feedback:

  1. Review the new commits, not the entire diff again. GitHub’s “Files changed since your last review” view helps.
  2. Resolve threads that were addressed satisfactorily. Leave comments if changes raise new issues.
  3. If all “Request changes” issues are resolved, change your review to “Approve.”

Team Norms for Turnaround Time

Our expectations:

  • First review response: within one business day of a PR being opened.
  • Follow-up reviews (after author pushes changes): same day if the PR was already in review.
  • Blocking reviews (“Request changes”): the reviewer is responsible for staying engaged until the PR merges or is closed. A review that blocks a PR and then goes silent is worse than no review.

If a PR is open and you cannot review it within one business day, leave a “Comment” acknowledging it and give an estimated timeline. This is basic courtesy that significantly reduces context-switching costs for the author.

Review checklist — what to look for:

## Correctness
- [ ] Does the code do what the PR description says it does?
- [ ] Are there missing null checks or unhandled edge cases?
- [ ] Is error handling consistent with the rest of the codebase?
- [ ] Are async operations awaited everywhere they should be?

## Security
- [ ] No secrets or credentials in code
- [ ] Input validated before use in queries or commands
- [ ] Authorization checks present on new endpoints

## Design
- [ ] Does this fit the existing architecture patterns?
- [ ] Is the code in the right layer (controller vs. service vs. repository)?
- [ ] Is there anything that will be hard to change later?

## Tests
- [ ] Are tests present for the new logic?
- [ ] Do tests cover the error cases, not just the happy path?
- [ ] Are test assertions specific enough to catch regressions?

## Operational
- [ ] Are new environment variables documented?
- [ ] Does this require a migration? Is it included?
- [ ] Are there logging statements for significant operations?

Key Differences

| Concern | Azure DevOps | GitHub |
|---|---|---|
| PR template | Not built-in (wiki or manual) | .github/PULL_REQUEST_TEMPLATE.md |
| Required reviewers by path | Branch policy: required reviewers | .github/CODEOWNERS |
| Review states | Approve / Wait for author | Approve / Comment / Request changes |
| Suggesting specific changes | Comment only | Inline suggestions (one-click accept) |
| Work item linking | Native (links to Azure Boards) | Via commit message convention (Closes #123) |
| Merge strategies | Merge, squash, rebase (configurable) | Merge, squash, rebase (configurable) |
| Branch protection | Branch policies (UI-configured) | Branch protection rules (UI-configured) |
| CI integration | Azure Pipelines (native) | GitHub Actions (native) |
| Resolving threads | Author and reviewer both can | Convention: author resolves |
| PR review history | Not preserved after merge | Preserved forever in the PR thread |

Gotchas for .NET Engineers

Gotcha 1: “Approve” in GitHub Does Not Always Allow Merge

In Azure DevOps, an approval from a required reviewer is typically sufficient to unlock the merge button (combined with a passing build). In GitHub, several independent checks must all pass simultaneously:

  • Required number of approvals
  • No “Request changes” reviews outstanding (even from non-required reviewers by default)
  • All required status checks passing
  • Branch up to date with base branch (if configured)
  • All conversations resolved (if configured)

The most common confusion: a reviewer who submitted “Request changes” previously must either re-approve or dismiss their own review before the PR can merge — even if the author addressed all the feedback. The author cannot dismiss another reviewer’s “Request changes” review by themselves (unless they have admin access).

If a PR is stuck despite looking like it should be mergeable, check the PR’s “Checks” section and the review status carefully. GitHub shows exactly which conditions are blocking the merge.

Gotcha 2: Force-Pushing to a PR Branch Discards Review History

Azure DevOps handles amended commits more gracefully. In GitHub, if you force-push to a branch (rewriting commits), any review comments attached to specific lines in the old commits become “outdated” and may lose their thread context. This makes it difficult for reviewers to see whether their feedback was addressed.

The rule: after opening a PR, only add new commits. Do not git commit --amend and git push --force to clean up your commit history. The PR branch is a shared record of the review process. History cleanup happens automatically during the squash merge.

If you absolutely must force-push (e.g., to remove an accidentally committed secret), communicate it to the reviewers so they can re-review the changed commits.

Gotcha 3: GitHub Actions CI Status Must Be Configured — It Is Not Automatic

Azure DevOps build validation automatically runs the pipeline when a PR is opened. In GitHub, the CI workflow must be configured to run on pull_request events, and it must be added to the branch protection rules as a required status check.

If CI is not required, reviewers cannot rely on “all checks pass” as a signal that the code is correct. The setup required in your GitHub Actions workflow:

# .github/workflows/ci.yml
on:
  push:
    branches: [main]
  pull_request:               # This line is required for PR checks
    branches: [main]
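The required check names (ci / lint, ci / type-check, ci / test) combine the workflow name with each job name. A sketch of how the rest of the same file might define them (the pnpm setup steps are assumptions, not prescribed by this chapter):

name: ci                  # workflow name; with the job names below it yields "ci / lint" etc.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22, cache: pnpm }
      - run: pnpm install --frozen-lockfile
      - run: pnpm lint
  # type-check and test follow the same shape, running `pnpm type-check` and `pnpm test`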

And in branch protection:

Require status checks to pass:
  Required checks: ci / lint, ci / type-check, ci / test

If a check does not appear in the required checks dropdown, it has never run in a PR against that branch. Merge a small PR with the CI workflow configured first, then add the checks to branch protection.

Gotcha 4: Small PRs Feel Slower But Are Faster

.NET engineers accustomed to large feature branches (“I’ll do the whole story in one PR”) often resist breaking work into smaller PRs because it feels like more overhead. In practice, the opposite is true:

  • A 150-line PR gets reviewed the same day. An 800-line PR waits days for someone to find time.
  • A small PR’s review takes 15 minutes. A large PR’s review takes an hour and still misses things.
  • When a small PR has a bug, the cause is obvious. When a large PR has a bug, the investigation is significant.

The discipline is in the decomposition: learn to identify the smallest shippable unit of a feature and ship that first. The database migration, then the API endpoint, then the UI — not all three together.

Hands-On Exercise

Practice the complete PR workflow with a non-trivial change.

Step 1: Set up the repository infrastructure

In a repository you have admin access to, configure:

  1. Create .github/PULL_REQUEST_TEMPLATE.md using the template from this article
  2. Create .github/CODEOWNERS with at least one path-based rule
  3. Configure branch protection on main with required reviews and required CI checks

Step 2: Open a PR using the full workflow

Create a feature branch, make a meaningful change (not a typo fix — something with at least 50 lines of new code), and open a PR:

  1. Fill out the PR template completely
  2. Make sure the description answers: what, why, approach, and what to review
  3. Add yourself as a reviewer (or ask a team member)

Step 3: Practice the review workflow

As the reviewer (or ask a team member to review), practice:

  1. Leave at least one inline suggestion using the suggestion feature
  2. Leave at least one comment asking for clarification or raising a concern
  3. Submit the review as “Request changes” (even if the changes are minor)

As the author, respond:

  1. Accept the suggestion with one click
  2. Address the comment with code changes or a reply explaining why not
  3. Resolve the threads
  4. Push the updated commits

Step 4: Verify the merge requirements

Confirm that all required checks pass and the PR meets all branch protection conditions before merging. Merge using squash merge.

After merging, run:

git log --oneline main -5

Confirm the squash merge produced a single clean commit with the PR title.

Quick Reference

PR Template (save as .github/PULL_REQUEST_TEMPLATE.md)

## What and Why
<!-- What does this PR do, and why? Closes #[issue] -->

## Approach
<!-- How? Why this over alternatives? -->

## Testing
- [ ] Unit tests added/updated
- [ ] Tested locally
- [ ] Edge cases: <!-- list -->

## Review Notes
<!-- What should reviewers focus on? Where are you uncertain? -->

Review Checklist

| Category | Check |
|---|---|
| Correctness | Logic matches PR description |
| Correctness | Null/undefined edge cases handled |
| Correctness | Async operations awaited |
| Security | No hardcoded secrets |
| Security | Input validated before DB/shell use |
| Security | Auth checks on new endpoints |
| Design | Fits existing architecture patterns |
| Tests | Cover error paths, not just happy path |
| Operational | Environment variables documented |
| Operational | Migrations included if schema changed |

GitHub vs. Azure DevOps Quick Map

| Azure DevOps | GitHub |
|---|---|
| Branch policy: required reviewers | .github/CODEOWNERS + branch protection |
| Branch policy: build validation | GitHub Actions on pull_request + required status checks |
| Required reviewers by path | .github/CODEOWNERS |
| Approve | Approve |
| Wait for author | Request changes |
| PR work item link | Closes #123 in description |
| Abandon PR | Close PR |

Key Branch Protection Settings for main

✓ Require pull request before merging
  ✓ Require approvals: 1
  ✓ Dismiss stale approvals on new commits
  ✓ Require review from Code Owners
✓ Require status checks to pass
  ✓ Require branches to be up to date
  (add your CI job names here)
✓ Require conversation resolution before merging
✗ Allow force pushes
✗ Allow deletions

Further Reading

Debugging Production Issues: The Node.js Playbook

For .NET engineers who know: Attaching the Visual Studio debugger to a running process, capturing memory dumps with ProcDump or dotMemory, reading Application Insights live metrics, and using structured logs in Azure Monitor
You’ll learn: How production debugging works in a Node.js stack — a fundamentally different model from the attach-and-step approach you’re used to
Time: 15-20 min read


The .NET Way (What You Already Know)

In .NET production debugging, you have several overlapping tools. When something goes wrong, you can attach the Visual Studio debugger to the running IIS or Kestrel process, set breakpoints, and inspect live state — often without restarting. For deeper issues, you capture a memory dump with ProcDump, pull it into WinDbg or Visual Studio, and examine heap objects, thread stacks, and GC roots. Application Insights gives you distributed traces, dependency call timing, live metrics, and a query language (KQL) for slicing logs by request, exception type, or custom dimension.

The core assumption in .NET production debugging is that you can reconstruct program state after the fact. A memory dump is a snapshot of the entire heap. A call stack tells you exactly which thread was doing what. The CLR’s type system survives into the dump — you can ask WinDbg “show me every Order object on the heap” and it works.

That assumption does not transfer to Node.js.


The Node.js Way

Why the Debugging Model Is Different

Node.js is a single-threaded runtime executing on the V8 engine. There is no concept of “attaching to the process and pausing execution” in production — doing so would block all request handling for every user. There are no native memory dump utilities comparable to ProcDump. There is no built-in equivalent to Application Insights that ships with the runtime.

What Node.js production debugging gives you instead:

  1. Structured logs — your primary source of truth for what happened
  2. Error tracking with stack traces — Sentry captures exceptions with source-mapped stacks
  3. Heap snapshots — V8’s heap profiler, accessed via --inspect or programmatic APIs
  4. CPU profiles — V8’s sampling profiler for identifying hot code paths
  5. Process metrics — memory usage, event loop lag, handle counts

The mental shift: in .NET, you debug by reconstructing state. In Node.js, you debug by reading what the code logged before it broke. This is closer to how you approach distributed systems debugging — trace IDs, log correlation, structured events — than traditional debugger usage.

Sentry: Your Application Insights Equivalent

Sentry is the primary error tracking tool. Install it early; retrofitting it later is painful.

pnpm add @sentry/node @sentry/profiling-node

// src/instrument.ts — load this FIRST, before any other imports
import * as Sentry from '@sentry/node';
import { nodeProfilingIntegration } from '@sentry/profiling-node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.RENDER_GIT_COMMIT, // Render sets this automatically
  integrations: [
    nodeProfilingIntegration(),
  ],
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
  profilesSampleRate: 1.0,
});

// src/main.ts — instrument.ts must be the very first import
import './instrument';
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(process.env.PORT ?? 3000);
}
bootstrap();

In NestJS, add the Sentry exception filter to capture unhandled errors:

// src/filters/sentry-exception.filter.ts
import { Catch, ArgumentsHost, HttpException } from '@nestjs/common';
import { BaseExceptionFilter } from '@nestjs/core';
import * as Sentry from '@sentry/node';

@Catch()
export class SentryExceptionFilter extends BaseExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    // Only capture unexpected errors; don't report 4xx HTTP exceptions
    if (!(exception instanceof HttpException)) {
      Sentry.captureException(exception);
    }
    super.catch(exception, host);
  }
}

// src/main.ts (HttpAdapterHost is imported from @nestjs/core)
const { httpAdapter } = app.get(HttpAdapterHost);
app.useGlobalFilters(new SentryExceptionFilter(httpAdapter));

Sentry Error Triage Workflow

When an alert fires or a user reports an error:

  1. Open the Sentry issue. The issue page groups all occurrences of the same error by stack trace fingerprint — equivalent to Application Insights’ “exceptions by problem ID.”

  2. Read the breadcrumbs. Sentry automatically records HTTP requests, console logs, and database queries in the 60 seconds before the error. This is your timeline of what the process was doing.

  3. Check the source-mapped stack trace. If source maps are configured correctly, you see TypeScript line numbers, not compiled JavaScript. If you see minified code (at e (index.js:1:4823)), source maps are not uploaded — fix this before debugging further.

  4. Check the user context and tags. Sentry shows which user triggered the error, what request they made, and what environment variables your code attached. Set meaningful context early:

// In your auth middleware or guard
Sentry.setUser({ id: user.id, email: user.email });
Sentry.setTag('tenant', user.tenantId);
  5. Check for similar events. The “Events” tab shows every occurrence. Look for patterns: does it happen only for specific users, specific inputs, or specific times of day?

  6. Mark it as “In Progress” and assign it before you start fixing it.

Source Maps in Production

Without source maps, Sentry shows minified stack traces that are unreadable. There are two approaches.

Option A: Upload source maps during CI/CD (recommended)

# Install the Sentry CLI
pnpm add -D @sentry/cli

# In your CI/CD pipeline, after building:
npx sentry-cli releases new "$RENDER_GIT_COMMIT"
npx sentry-cli releases files "$RENDER_GIT_COMMIT" upload-sourcemaps ./dist
npx sentry-cli releases finalize "$RENDER_GIT_COMMIT"

Add this to your GitHub Actions workflow, not your production startup. Source maps contain your full source code — never expose them to the public.

Option B: Sentry Vite/webpack plugin (build-time upload)

// vite.config.ts (for Vite-based frontends)
import { sentryVitePlugin } from '@sentry/vite-plugin';

export default defineConfig({
  plugins: [
    sentryVitePlugin({
      authToken: process.env.SENTRY_AUTH_TOKEN,
      org: 'your-org',
      project: 'your-project',
    }),
  ],
  build: {
    sourcemap: true, // required
  },
});

For NestJS (built with tsc or webpack), configure source maps in tsconfig.json:

{
  "compilerOptions": {
    "sourceMap": true,
    "inlineSources": true
  }
}

Reading Node.js Stack Traces

A Node.js stack trace is organized the same way as a CLR stack trace: the error type and message appear first, then frames from innermost to outermost:

TypeError: Cannot read properties of undefined (reading 'id')
    at OrderService.getOrderTotal (order.service.ts:47:24)
    at OrdersController.getTotal (orders.controller.ts:23:38)
    at Layer.handle [as handle_request] (express/lib/router/layer.js:95:5)
    at next (express/lib/router/route.js:137:13)
    ...

Frame format: at [function] ([file]:[line]:[column]). The frames without recognizable file names (express/lib/router/layer.js) are framework internals — equivalent to ASP.NET Core’s middleware pipeline frames that you learn to skip past.

The key frames are the first ones with your application code. In the example above: order.service.ts:47 is where the error originated. orders.controller.ts:23 is what called it. Everything below those two frames is the framework plumbing.

Async stack traces are the Node.js-specific challenge. In synchronous .NET code, the call stack is linear. In async Node.js code, awaited calls create new microtask queue entries, which can lose stack context. Node.js 12+ preserves async stack traces by default (V8’s --async-stack-traces behavior), but the output can be verbose:

at async OrderService.getOrderTotal (order.service.ts:47:24)
at async OrdersController.getTotal (orders.controller.ts:23:38)

If you see a stack trace that terminates at the async boundary and doesn’t show what called the async function, the caller did not await properly — a classic Node.js footgun.
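A minimal sketch of how that happens (the functions here are hypothetical, not from the codebase):

const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function inner(): Promise<void> {
  await delay(10);          // after this await, the original synchronous call stack is gone
  throw new Error('boom');  // the stack shows inner() plus async frames of whoever awaited it
}

async function handler(): Promise<void> {
  inner(); // BUG: not awaited, so there is no "at async handler" frame and the rejection is unhandled
}

// With `await inner()`, the trace would also include "at async handler".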

Structured Log Analysis

Use a structured logger from the start. pino is the standard in NestJS projects.

pnpm add pino pino-pretty nestjs-pino

// src/app.module.ts
import { LoggerModule } from 'nestjs-pino';

@Module({
  imports: [
    LoggerModule.forRoot({
      pinoHttp: {
        level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
        transport: process.env.NODE_ENV !== 'production'
          ? { target: 'pino-pretty' }
          : undefined, // In production, output raw JSON for log aggregators
        serializers: {
          req: (req) => ({ method: req.method, url: req.url, id: req.id }),
          res: (res) => ({ statusCode: res.statusCode }),
        },
      },
    }),
  ],
})
export class AppModule {}

Production logs output as JSON, one object per line. Each log entry has at minimum:

  • level — numeric level (10=trace, 20=debug, 30=info, 40=warn, 50=error, 60=fatal)
  • time — Unix timestamp in milliseconds
  • msg — the log message
  • pid — process ID
  • Any custom fields you added
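An example of what a single production log line looks like with these fields (the values are illustrative):

{"level":30,"time":1718000000000,"pid":123,"msg":"Order created","orderId":"ord_123","userId":"usr_456","durationMs":42}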

When triaging an incident, correlate logs using request IDs:

// Log meaningful context alongside the message
this.logger.log({
  msg: 'Order created',
  orderId: order.id,
  userId: user.id,
  durationMs: Date.now() - startTime,
});
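Because production logs are one JSON object per line, filtering by any field is a one-liner with jq; a sketch, assuming the logs were exported to app.log and the req serializer shown earlier ("req-42" is an illustrative id):

# Show every log line belonging to a single request
jq 'select(.req.id == "req-42")' app.log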

Render Log Viewer

Render’s log viewer (your hosting platform) provides a web interface for live and historical logs. Key features:

  • Live tail — equivalent to kubectl logs -f or Azure’s Log Stream
  • Filter by text — basic grep-style filtering; for complex queries, export to your log aggregator
  • Log retention — Render retains logs for 7 days on free plans, longer on paid plans

For production incidents, use the Render log viewer to:

  1. Confirm the error is occurring (not a Sentry reporting issue)
  2. Correlate timestamps with deployment events
  3. Check for patterns before the error: unusual traffic volume, repeated retry storms

For serious analysis beyond basic text search, pipe logs to a proper aggregator. Render supports log drains (forwarding to Datadog, Papertrail, etc.). For simpler setups, the Render CLI can tail and export logs:

render logs --service YOUR_SERVICE_ID --tail

Memory Leak Detection

The symptoms: Node.js process memory grows steadily over hours or days. The application becomes slower, then starts throwing out-of-memory errors, or the hosting platform restarts it.

The cause: In Node.js, the most common memory leaks are:

  1. Closures holding references to large objects longer than expected
  2. Event listeners added but never removed
  3. Caches with no eviction policy
  4. Database connection pools not being released
  5. Global state accumulating data over time

Detection step 1: Monitor memory trend

Node.js exposes process.memoryUsage():

// Scheduled health check or metrics endpoint
const usage = process.memoryUsage();
this.logger.log({
  msg: 'Memory usage',
  heapUsed: Math.round(usage.heapUsed / 1024 / 1024) + 'MB',
  heapTotal: Math.round(usage.heapTotal / 1024 / 1024) + 'MB',
  rss: Math.round(usage.rss / 1024 / 1024) + 'MB',
  external: Math.round(usage.external / 1024 / 1024) + 'MB',
});

If heapUsed grows linearly over time without plateauing, you have a leak.

Detection step 2: Heap snapshot in staging

Never run --inspect in production (it opens a debugging port, is a security risk, and has performance overhead). Use staging:

# Start Node.js with the inspector open
node --inspect dist/main.js

# Or with NestJS:
pnpm start:debug

Open Chrome DevTools (chrome://inspect), connect to the Node.js process, go to the Memory tab, and take heap snapshots. Compare two snapshots taken minutes apart. The “Comparison” view shows which object types grew between snapshots — the same workflow as dotMemory’s “Object Sets” comparison.

Detection step 3: Programmatic heap snapshot

import { writeHeapSnapshot } from 'v8';
import { existsSync, mkdirSync } from 'fs';

// Trigger via an HTTP endpoint on a NestJS controller (only in staging, never in production);
// @Get and ForbiddenException come from @nestjs/common
@Get('debug/heap-snapshot')
async takeHeapSnapshot() {
  if (process.env.NODE_ENV === 'production') {
    throw new ForbiddenException();
  }
  const dir = '/tmp/heapdumps';
  if (!existsSync(dir)) mkdirSync(dir);
  const filename = writeHeapSnapshot(dir);
  return { filename };
}

Open the .heapsnapshot file in Chrome DevTools Memory tab.

CPU Profiling

If the application is slow without obvious memory growth, CPU profiling identifies hot code paths. The approach is similar to dotTrace’s sampling profiler.

import { Session } from 'inspector';
import { writeFileSync } from 'fs';

// Staging-only endpoint on a NestJS controller (@Get, @Query, and ForbiddenException come from @nestjs/common)
@Get('debug/cpu-profile')
async takeCpuProfile(@Query('durationMs') durationMs = '5000') {
  if (process.env.NODE_ENV === 'production') {
    throw new ForbiddenException();
  }
  const session = new Session();
  session.connect();

  await new Promise<void>((resolve) => {
    session.post('Profiler.enable', () => {
      session.post('Profiler.start', () => {
        setTimeout(() => {
          session.post('Profiler.stop', (err, { profile }) => {
            writeFileSync('/tmp/cpu-profile.cpuprofile', JSON.stringify(profile));
            resolve();
          });
        }, parseInt(durationMs));
      });
    });
  });

  return { filename: '/tmp/cpu-profile.cpuprofile' };
}

Load the .cpuprofile file in Chrome DevTools Performance tab. The flame chart shows which functions consumed the most CPU time. Functions wider in the flame chart are hotter.

Common Node.js Production Issues

Event loop blocking

This is the Node.js-specific issue with no direct .NET equivalent. If your code runs a synchronous operation that takes more than a few milliseconds, all other requests queue behind it. Common culprits:

  • JSON.parse() on very large payloads (megabytes)
  • Synchronous crypto operations
  • fs.readFileSync() in request handlers
  • Complex regular expressions on large strings (ReDoS)
  • Tight loops over large arrays

Detect event loop lag:

import { monitorEventLoopDelay } from 'perf_hooks';

const histogram = monitorEventLoopDelay({ resolution: 10 });
histogram.enable();

setInterval(() => {
  const lagMs = histogram.mean / 1e6; // Convert nanoseconds to milliseconds
  if (lagMs > 100) {
    this.logger.warn({ msg: 'Event loop lag detected', lagMs });
  }
  histogram.reset();
}, 10_000);

Unhandled promise rejections

In Node.js, an unhandled promise rejection crashes the process on Node.js 15 and later; on older versions it only produces a warning and the error is effectively lost. Both behaviors are bad.

// In main.ts — catch what everything else misses
process.on('unhandledRejection', (reason, promise) => {
  logger.error({ msg: 'Unhandled promise rejection', reason });
  Sentry.captureException(reason);
  // Don't crash immediately — give Sentry time to flush
  setTimeout(() => process.exit(1), 1000);
});

process.on('uncaughtException', (error) => {
  logger.fatal({ msg: 'Uncaught exception', error });
  Sentry.captureException(error);
  setTimeout(() => process.exit(1), 1000);
});

Memory leaks from closures

// Leak: each request registers an event listener that holds a closure
// referencing the response object — it is never removed
app.get('/events', (req, res) => {
  const handler = (data) => res.write(data); // closure over res
  emitter.on('data', handler);
  // Bug: emitter.off('data', handler) is never called when req closes
});

// Fix: clean up listeners when the connection closes
app.get('/events', (req, res) => {
  const handler = (data) => res.write(data);
  emitter.on('data', handler);
  req.on('close', () => emitter.off('data', handler)); // cleanup
});

Connection pool exhaustion

Prisma, pg, and most database clients maintain a connection pool. If queries hold connections longer than the timeout, new queries queue indefinitely.

// Check pool status in health endpoint
@Get('health')
async health() {
  const result = await this.prisma.$queryRaw`SELECT 1 as alive`;
  return { status: 'ok', db: result };
}

If the health check times out but the process is otherwise running, suspect pool exhaustion. Check for long-running transactions or missing await on Prisma calls (a query that is started but never awaited can hold a connection until it is garbage collected).


Key Differences

| .NET Approach | Node.js Approach |
|---|---|
| Attach debugger to live process | Read logs and Sentry; use staging for interactive debug |
| Memory dump + WinDbg/dotMemory | Heap snapshot via V8 inspector in staging |
| Thread-per-request model | Single-threaded; event loop lag is the unique failure mode |
| Application Insights built-in | Sentry + structured logs (pino) assembled manually |
| CLR type information in dumps | Heap snapshots show object shapes but not TypeScript types |
| Thread stacks show concurrent work | Single thread; “concurrent” issues are async sequencing bugs |
| ConfigureAwait(false) matters | No thread switching; all code runs on the event loop thread |
| Background threads for CPU work | CPU work blocks the event loop; offload to worker threads or external processes |

Gotchas for .NET Engineers

Gotcha 1: Source maps must be explicitly configured or Sentry is useless. In .NET, Application Insights records stack traces against your PDB symbols automatically. In Node.js, Sentry receives minified JavaScript stack traces. Until you configure source map upload in your CI/CD pipeline, every Sentry error will show at e (index.js:1:4823). Set this up before your first deployment, not after the first production incident.

Gotcha 2: --inspect in production is a security hole. The Node.js --inspect flag opens a WebSocket debugging port. If that port is exposed — even accidentally — an attacker can execute arbitrary code in your process. Never pass --inspect to production Node.js. Use it only locally or in isolated staging environments with network-level access controls. If you see --inspect in a production Dockerfile or a hosting platform’s start command, treat it as a critical security finding.

Gotcha 3: Synchronous operations in async code block every user. A .NET developer’s instinct is that synchronous work only blocks the current thread. In Node.js, synchronous work blocks the event loop, which is the only thread handling all requests. JSON.parse() on a 10MB payload, fs.readFileSync() in a route handler, or a CPU-intensive loop in a service method will stall every in-flight request until it completes. The fix is to either move the work to a Worker thread (Node.js built-in) or break it into smaller async chunks.
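A minimal worker_threads sketch of the first option (the file names and the summing example are hypothetical):

// sum.worker.ts: runs on a separate thread, so the loop cannot block the event loop
import { parentPort, workerData } from 'node:worker_threads';

const { n } = workerData as { n: number };
let sum = 0;
for (let i = 0; i < n; i++) sum += i;
parentPort?.postMessage(sum);

// Caller (for example, inside a service method)
import { Worker } from 'node:worker_threads';

export function sumInWorker(n: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL('./sum.worker.js', import.meta.url), {
      workerData: { n },
    });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}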

Gotcha 4: Unhandled promise rejections are silent data loss in older code. In pre-Node.js 15 codebases, a rejected promise without a .catch() handler silently discards the error. If you see code like someAsyncFunction() without await or .catch(), and that code can reject, you have silent failures. Always await async calls in request handlers. Always add process.on('unhandledRejection') as a safety net.

Gotcha 5: Memory profiling requires staging access, not production. In .NET, you can often take a memory dump from a production server (pausing one process on a multi-instance deployment). In Node.js on Render, you do not have direct process access. Plan your debugging workflow before you need it: have a staging environment with --inspect accessible, have a process for reproducing production traffic in staging, and have heap snapshot endpoints (protected) deployed in staging at all times.


Hands-On Exercise

Set up the full observability stack for a NestJS application and trigger a real error through each layer.

Part 1: Sentry integration

  1. Create a free Sentry account at sentry.io, create a Node.js project, and copy the DSN.
  2. Install @sentry/node and initialize it as the first import in main.ts.
  3. Add the SentryExceptionFilter as shown above.
  4. Throw a deliberate error in a route handler: throw new Error('Test Sentry integration').
  5. Call the endpoint. Verify the error appears in Sentry with a readable stack trace.
  6. If the stack trace shows minified code, configure sourceMap: true in tsconfig.json and redeploy.

Part 2: Structured logging

  1. Install pino and nestjs-pino.
  2. Configure LoggerModule with JSON output for production and pretty output for development.
  3. Replace all console.log calls in one service with this.logger.log({ msg: '...', ...context }).
  4. Run the application and verify log output is structured JSON.

Part 3: Memory monitoring

  1. Add a /health endpoint that returns process.memoryUsage() formatted as MB.
  2. Add a memory warning log that fires when heapUsed exceeds 80% of heapTotal (a sketch of both steps follows this list).
  3. Write a test that calls the endpoint repeatedly in a loop and watches for memory growth.
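
A minimal sketch of those two steps, using process.memoryUsage() and the NestJS Logger (route name and threshold come straight from the steps above):

// src/health/health.controller.ts
import { Controller, Get, Logger } from '@nestjs/common';

@Controller('health')
export class HealthController {
  private readonly logger = new Logger(HealthController.name);

  @Get()
  memory() {
    const { heapUsed, heapTotal, rss } = process.memoryUsage();
    const toMb = (bytes: number) => Math.round(bytes / 1024 / 1024);

    // Warn when the heap is close to its current limit
    if (heapUsed > heapTotal * 0.8) {
      this.logger.warn(`Heap usage high: ${toMb(heapUsed)} / ${toMb(heapTotal)} MB`);
    }

    return {
      status: 'ok',
      heapUsedMb: toMb(heapUsed),
      heapTotalMb: toMb(heapTotal),
      rssMb: toMb(rss),
    };
  }
}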

Part 4: Event loop monitoring

  1. Add monitorEventLoopDelay from perf_hooks as shown above.
  2. Write a route that performs a CPU-intensive synchronous operation (e.g., summing 10 million numbers in a loop).
  3. Call that route while also calling a fast route in parallel. Observe that the fast route is delayed.
  4. Log the event loop lag before and after the slow call (a minimal sketch of the lag measurement follows this list).
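
A minimal sketch of the lag measurement, using only the built-in perf_hooks module; the histogram reports nanoseconds, so values are converted to milliseconds before logging:

// src/monitoring/event-loop.ts
import { monitorEventLoopDelay } from 'node:perf_hooks';

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

// Call periodically (e.g. every 10 seconds) or expose via the /health endpoint
export function eventLoopLagMs() {
  const toMs = (ns: number) => Math.round(ns / 1e6);
  return {
    mean: toMs(histogram.mean),
    p99: toMs(histogram.percentile(99)),
    max: toMs(histogram.max),
  };
}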

Quick Reference

Observability Stack

| Need | Tool | .NET Equivalent |
| --- | --- | --- |
| Error tracking | Sentry | Application Insights Exceptions |
| Structured logs | pino + nestjs-pino | Serilog + Application Insights |
| Log viewer (hosted) | Render log viewer | Azure Log Stream |
| Heap profiling | V8 inspector (Chrome DevTools) | dotMemory / WinDbg |
| CPU profiling | V8 profiler (.cpuprofile) | dotTrace |
| Event loop lag | perf_hooks.monitorEventLoopDelay | No direct equivalent |
| Process metrics | process.memoryUsage() | Azure Monitor / Kusto |

Incident Response Runbook

1. CHECK SENTRY
   - Is there a new error group with rising occurrences?
   - Is the stack trace readable (source maps working)?
   - What are the breadcrumbs showing?

2. CHECK RENDER LOGS
   - Is the process restarting? (look for process exit logs)
   - Is there a burst of errors at a specific timestamp?
   - Does the error correlate with a deployment?

3. CHECK MEMORY
   GET /health -> heapUsed
   If growing: take heap snapshot in staging, compare object counts

4. CHECK EVENT LOOP
   - Is response latency elevated across ALL endpoints? (event loop blocking)
   - Is it only specific endpoints? (slow query, external API)

5. ROLLBACK IF NEEDED
   Render dashboard -> Deploys -> Rollback to previous deploy
   This takes ~60 seconds; do it immediately if a deploy caused the issue

6. DOCUMENT
   - What was the error?
   - What was the root cause?
   - What was the fix?
   - Add to runbook if it could recur

Common Production Issues and Fixes

| Symptom | Likely Cause | Investigation |
| --- | --- | --- |
| All endpoints slow simultaneously | Event loop blocking | Check event loop lag metric; look for sync operations in hot paths |
| Memory grows without bound | Closure leak / listener leak | Heap snapshot comparison in staging |
| Process crashes with OOM | Memory leak reached limit | Heap snapshot before crash; check Render restart logs |
| UnhandledPromiseRejection in logs | Missing await somewhere | Search codebase for async calls without await; add global handler |
| 500 errors on specific endpoint | Uncaught exception in handler | Check Sentry for that endpoint; verify error handling |
| DB queries timing out | Pool exhaustion | Check for missing await on Prisma; look for long transactions |
| Sentry shows minified traces | Source maps not uploaded | Configure source map upload in CI; verify sourceMap: true in tsconfig |

Key Commands

# Run with inspector open (staging only)
node --inspect dist/main.js

# Take a heap snapshot to a file programmatically
node -e "const v8 = require('v8'); v8.writeHeapSnapshot();"

# Check Render logs via CLI
render logs --service SERVICE_ID --tail

# Trigger a heap snapshot from a running process
# (requires starting the process with --heapsnapshot-signal=SIGUSR2)
kill -USR2 <PID>

Sentry Configuration Checklist

  • SENTRY_DSN set in environment variables
  • instrument.ts imported before all other imports in main.ts
  • SentryExceptionFilter registered globally
  • Source maps configured (sourceMap: true, inlineSources: true in tsconfig)
  • Source maps uploaded in CI/CD pipeline (not at runtime)
  • release set to git commit SHA for version tracking
  • environment set to production / staging / development
  • tracesSampleRate reduced in production (0.05–0.1 for high-traffic services)
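
Put together, a minimal instrument.ts that satisfies this checklist could look like the sketch below; the environment variable names are assumptions, so map them to whatever your pipeline actually sets:

// src/instrument.ts: must be imported before anything else in main.ts
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV ?? 'development',
  // Tie errors to a deploy; RENDER_GIT_COMMIT is assumed to hold the commit SHA
  // (swap in whatever SHA variable your CI or host provides)
  release: process.env.RENDER_GIT_COMMIT,
  // Keep tracing cheap in production; raise locally when you need more traces
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.05 : 1.0,
});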

Further Reading

Feature Flags and Progressive Rollouts

For .NET engineers who know: Azure App Configuration feature flags, LaunchDarkly SDK for .NET, IFeatureManager from Microsoft.FeatureManagement, and basic A/B testing concepts
You’ll learn: How to implement feature flags across the JS/TS stack — from simple environment variable flags to full targeting rules — and when to graduate from one level to the next
Time: 10-15 min read


The .NET Way (What You Already Know)

In the Microsoft ecosystem, feature flags usually live in Azure App Configuration. You reference Microsoft.FeatureManagement in your project, configure the IFeatureManager service, and toggle flags in the Azure Portal without a code deployment. Flags can have simple on/off state or targeting rules (percentage rollout, user group, A/B split). The [FeatureGate("MyFeature")] action filter keeps flag-protected routes clean.

// Startup.cs
builder.Services.AddAzureAppConfiguration();
builder.Services.AddFeatureManagement();

// Controller usage
[FeatureGate("BetaCheckout")]
[HttpGet("checkout-v2")]
public async Task<IActionResult> CheckoutV2() { ... }

// Service usage
public class OrderService
{
    private readonly IFeatureManager _features;

    public async Task<decimal> CalculateTax(Order order)
    {
        if (await _features.IsEnabledAsync("NewTaxEngine"))
            return await _newTaxEngine.Calculate(order);
        return await _legacyTaxEngine.Calculate(order);
    }
}

LaunchDarkly provides a more sophisticated version of this with real-time flag delivery, user targeting, A/B testing analytics, and kill switches — but the usage pattern is similar.

The mental model: feature flags are a runtime configuration system that sits above your code and lets you control execution paths without deployments.


The JS/TS Way

The JS/TS ecosystem has no single standard equivalent to Microsoft.FeatureManagement. The good news: the concept translates directly, and there are options at every complexity level. The bad news: you have to choose and integrate the tool yourself.

Here is the progression, from simplest to most powerful.

Level 1: Environment Variable Flags

The starting point for most features. Zero dependencies, works immediately, and forces you to define what “on” and “off” mean before adding targeting complexity.

// src/config/features.ts
export const features = {
  betaCheckout: process.env.FEATURE_BETA_CHECKOUT === 'true',
  newTaxEngine: process.env.FEATURE_NEW_TAX_ENGINE === 'true',
  aiSuggestions: process.env.FEATURE_AI_SUGGESTIONS === 'true',
} as const;

export type FeatureName = keyof typeof features;
// In a NestJS service
import { features } from '../config/features';

@Injectable()
export class OrderService {
  async calculateTax(order: Order): Promise<number> {
    if (features.newTaxEngine) {
      return this.newTaxEngine.calculate(order);
    }
    return this.legacyTaxEngine.calculate(order);
  }
}

In React or Vue, import the same config:

// src/components/Checkout.tsx
import { features } from '@/config/features';

export function Checkout() {
  return (
    <div>
      {features.betaCheckout ? <CheckoutV2 /> : <CheckoutV1 />}
    </div>
  );
}

Set flags in your hosting platform’s environment variables. On Render:

FEATURE_BETA_CHECKOUT=true
FEATURE_NEW_TAX_ENGINE=false

Changing a flag requires updating the environment variable and redeploying (or restarting the service). This is the key limitation of env var flags — there is no real-time toggle.

When env var flags are enough:

  • Feature is either fully on or fully off for all users
  • You are comfortable with a redeploy to change the flag state
  • You do not need per-user targeting
  • You do not need analytics on flag exposure

When to graduate to a flag service:

  • You need to enable a feature for specific users, groups, or a percentage of traffic
  • You need to toggle flags without a deployment
  • You need analytics on who saw which variant
  • You are running a formal A/B test with statistical significance requirements

Level 2: Database Flags

When you need per-tenant or per-user flag control without a third-party service, store flags in the database.

-- Migration: add feature_flags table
CREATE TABLE feature_flags (
  id          SERIAL PRIMARY KEY,
  name        TEXT NOT NULL UNIQUE,
  enabled     BOOLEAN NOT NULL DEFAULT false,
  description TEXT,
  updated_at  TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE user_feature_flags (
  user_id     TEXT NOT NULL,
  flag_name   TEXT NOT NULL REFERENCES feature_flags(name),
  enabled     BOOLEAN NOT NULL,
  PRIMARY KEY (user_id, flag_name)
);
// src/feature-flags/feature-flags.service.ts
@Injectable()
export class FeatureFlagsService {
  constructor(private readonly prisma: PrismaClient) {}

  async isEnabled(flagName: string, userId?: string): Promise<boolean> {
    // User-specific override takes precedence
    if (userId) {
      const userFlag = await this.prisma.userFeatureFlags.findUnique({
        where: { userId_flagName: { userId, flagName } },
      });
      if (userFlag !== null) return userFlag.enabled;
    }

    // Fall back to global flag state
    const flag = await this.prisma.featureFlags.findUnique({
      where: { name: flagName },
    });
    return flag?.enabled ?? false;
  }
}

Cache flag lookups aggressively — you do not want a database query on every request:

import NodeCache from 'node-cache';

@Injectable()
export class FeatureFlagsService {
  private readonly cache = new NodeCache({ stdTTL: 30 }); // 30 second TTL

  async isEnabled(flagName: string, userId?: string): Promise<boolean> {
    const cacheKey = `${flagName}:${userId ?? 'global'}`;
    const cached = this.cache.get<boolean>(cacheKey);
    if (cached !== undefined) return cached;

    const result = await this.lookupFlag(flagName, userId);
    this.cache.set(cacheKey, result);
    return result;
  }
}

Level 3: Feature Flag Service (LaunchDarkly / Flagsmith / PostHog)

For user targeting, gradual rollouts, and A/B testing analytics, integrate a dedicated flag service. Our recommended progression:

  • Flagsmith — open source, self-hostable, generous free tier, clean API
  • PostHog — pairs flags with product analytics, good if you already use PostHog for events
  • LaunchDarkly — enterprise-grade, real-time delivery, expensive but battle-tested

Flagsmith integration in NestJS:

pnpm add flagsmith-nodejs
// src/feature-flags/flagsmith.service.ts
import Flagsmith from 'flagsmith-nodejs';

@Injectable()
export class FlagsmithService implements OnModuleInit {
  private client: Flagsmith;

  async onModuleInit() {
    this.client = new Flagsmith({
      environmentKey: process.env.FLAGSMITH_ENVIRONMENT_KEY!,
      enableAnalytics: true,
    });
    // Pre-fetch all flags for performance
    await this.client.getEnvironmentFlags();
  }

  async isEnabled(flagName: string, userId?: string): Promise<boolean> {
    if (userId) {
      const flags = await this.client.getIdentityFlags(userId, {
        traits: { userId },
      });
      return flags.isFeatureEnabled(flagName);
    }
    const flags = await this.client.getEnvironmentFlags();
    return flags.isFeatureEnabled(flagName);
  }

  async getVariant(flagName: string, userId: string): Promise<string | null> {
    const flags = await this.client.getIdentityFlags(userId);
    return flags.getFeatureValue(flagName) as string | null;
  }
}

Flagsmith integration in React:

pnpm add flagsmith react-flagsmith
// src/providers/FlagsmithProvider.tsx
import { FlagsmithProvider } from 'react-flagsmith';
import flagsmith from 'flagsmith';

export function AppFlagsmithProvider({ children, userId }: {
  children: React.ReactNode;
  userId?: string;
}) {
  return (
    <FlagsmithProvider
      flagsmith={flagsmith}
      options={{
        environmentID: import.meta.env.VITE_FLAGSMITH_ENVIRONMENT_KEY,
        identity: userId, // enables user-specific flags
        cacheFlags: true,
        enableAnalytics: true,
      }}
    >
      {children}
    </FlagsmithProvider>
  );
}
// In a component
import { useFlags } from 'react-flagsmith';

export function CheckoutPage() {
  const { beta_checkout } = useFlags(['beta_checkout']);

  return (
    <div>
      {beta_checkout.enabled ? <CheckoutV2 /> : <CheckoutV1 />}
    </div>
  );
}

Feature Flags in NestJS Guards

The cleanest way to protect entire routes behind a flag is a NestJS guard — the equivalent of [FeatureGate] in ASP.NET Core:

// src/guards/feature-flag.guard.ts
import { CanActivate, ExecutionContext, Injectable, SetMetadata } from '@nestjs/common';
import { Reflector } from '@nestjs/core';
import { FeatureFlagsService } from '../feature-flags/feature-flags.service';

export const RequireFlag = (flagName: string) =>
  SetMetadata('featureFlag', flagName);

@Injectable()
export class FeatureFlagGuard implements CanActivate {
  constructor(
    private readonly reflector: Reflector,
    private readonly flags: FeatureFlagsService,
  ) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    const flagName = this.reflector.get<string>('featureFlag', context.getHandler());
    if (!flagName) return true; // no flag requirement, allow through

    const request = context.switchToHttp().getRequest();
    const userId = request.user?.id;
    return this.flags.isEnabled(flagName, userId);
  }
}
// Usage on a controller method
@Get('checkout-v2')
@UseGuards(AuthGuard, FeatureFlagGuard)
@RequireFlag('beta_checkout')
async checkoutV2(@Req() req: Request) {
  // Only reachable if beta_checkout flag is enabled for this user
}

Progressive Rollout

Percentage-based rollouts let you expose a feature to N% of users without a third-party service:

// Deterministic percentage rollout: same user always gets same experience
function isInRollout(userId: string, flagName: string, percentage: number): boolean {
  if (percentage <= 0) return false;
  if (percentage >= 100) return true;

  // Hash userId + flagName to a stable number in [0, 100)
  const hash = cyrb53(`${flagName}:${userId}`) % 100;
  return hash < percentage;
}

// Simple non-cryptographic hash (good enough for rollouts, not for security)
function cyrb53(str: string): number {
  let h1 = 0xdeadbeef, h2 = 0x41c6ce57;
  for (let i = 0; i < str.length; i++) {
    const ch = str.charCodeAt(i);
    h1 = Math.imul(h1 ^ ch, 2654435761);
    h2 = Math.imul(h2 ^ ch, 1597334677);
  }
  h1 = Math.imul(h1 ^ (h1 >>> 16), 2246822507) ^ Math.imul(h2 ^ (h2 >>> 13), 3266489909);
  h2 = Math.imul(h2 ^ (h2 >>> 16), 2246822507) ^ Math.imul(h1 ^ (h1 >>> 13), 3266489909);
  return 4294967296 * (2097151 & h2) + (h1 >>> 0);
}

Store the percentage in your database flags table and query it:

async isEnabled(flagName: string, userId?: string): Promise<boolean> {
  const flag = await this.prisma.featureFlags.findUnique({
    where: { name: flagName },
  });

  if (!flag?.enabled) return false;

  // If rolloutPercentage is set, check if this user is in the rollout
  if (flag.rolloutPercentage !== null && userId) {
    return isInRollout(userId, flagName, flag.rolloutPercentage);
  }

  return flag.enabled;
}

A/B Testing Basics

A proper A/B test requires:

  1. Assignment — each user is deterministically assigned to variant A or B
  2. Exposure logging — record which users saw which variant
  3. Outcome logging — record which users completed the goal action
  4. Analysis — calculate statistical significance of the difference

// Simple A/B assignment with exposure logging, written as a method on a service
// that has an injected analytics client (available as `this.analytics`)
type Variant = 'control' | 'treatment';

async assignVariant(
  userId: string,
  experimentName: string,
): Promise<Variant> {
  const hash = cyrb53(`${experimentName}:${userId}`) % 100;
  const variant: Variant = hash < 50 ? 'control' : 'treatment';

  // Log exposure to analytics
  await this.analytics.track({
    event: 'experiment_exposure',
    userId,
    properties: {
      experimentName,
      variant,
    },
  });

  return variant;
}

For anything beyond simple assignment, use PostHog (which combines analytics and flags) or a dedicated experimentation platform. Do not build your own statistical analysis — the math for correct significance testing is subtle and easy to get wrong.


Key Differences

| .NET/Azure Approach | JS/TS Approach |
| --- | --- |
| Microsoft.FeatureManagement + Azure App Configuration | Roll your own (env vars, DB) or use Flagsmith/PostHog/LaunchDarkly |
| [FeatureGate("Flag")] attribute on action methods | NestJS FeatureFlagGuard + @RequireFlag() decorator |
| IFeatureManager.IsEnabledAsync() | featureFlagsService.isEnabled() injected via DI |
| Real-time flag update without redeploy (Azure App Config) | Env var flags require redeploy; DB flags update immediately; flag services are real-time |
| Feature filters (percentage, targeting groups) | Implement manually or use a flag service |
| Integrated A/B testing (LaunchDarkly) | PostHog or LaunchDarkly; basic assignment is simple, analysis requires tooling |

Gotchas for .NET Engineers

Gotcha 1: Env var flags require a redeploy on Render. Azure App Configuration updates propagate to running instances without a restart (via polling or push). Render environment variable changes do not take effect until you redeploy or restart the service. If you need real-time flag toggles — to kill a bad feature immediately without a deployment pipeline — env var flags alone are insufficient. This is the primary reason to graduate to a database-backed or service-backed flag system.

Gotcha 2: Client-side and server-side flags must be kept in sync. In .NET, you typically have one source of truth for flag state (Azure App Config or LaunchDarkly server SDK). In a Next.js or Nuxt application, flag checks happen in three places: the server (during SSR), the API (during request handling), and the client (during hydration and user interaction). If these three read from different sources or have different evaluation timing, you get inconsistent behavior — a user sees the new feature rendered on the server but the old feature kicks in on the client. Use the same flag service on both sides, and pass initial flag state from server to client during SSR to avoid a flash of wrong content.
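
One common shape for this is to evaluate flags once on the server and hand the result to a client-side context provider. The sketch below assumes a Next.js App Router layout; FlagsProvider and getServerFlags are hypothetical helpers, not part of any library:

// app/layout.tsx (server): evaluate flags once per request and pass them down
import type { ReactNode } from 'react';
import { FlagsProvider } from './flags-provider'; // hypothetical client component wrapping React context
import { getServerFlags } from '@/lib/flags';     // hypothetical server-side flag evaluation

export default async function RootLayout({ children }: { children: ReactNode }) {
  // The client hydrates with exactly the values the server rendered with
  const flags = await getServerFlags(); // e.g. { betaCheckout: true }

  return (
    <html lang="en">
      <body>
        <FlagsProvider initialFlags={flags}>{children}</FlagsProvider>
      </body>
    </html>
  );
}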

Gotcha 3: Feature flags are permanent technical debt without a cleanup policy. Every feature flag is a branch in your code. Flags that ship and never get removed accumulate over time until no one is confident about what combination of flags is valid in production. Establish a rule when you create each flag: what is the condition under which this flag is removed? Commonly: the flag is removed in the sprint after it reaches 100% rollout. Add a TODO(TICKET-123): remove flag after full rollout comment at every flag evaluation site, and track flag removal as part of the original feature ticket.

Gotcha 4: Percentage rollouts must be deterministic, not random. A common mistake is using Math.random() < 0.10 to implement a 10% rollout. This assigns a user to a random variant on each page load or request, so a user sees the new feature on one request and the old feature on the next. Use a hash of the user ID (as shown above) so each user is always in the same bucket. This matters for both user experience and statistical validity of any A/B test.

Gotcha 5: Flag state in tests must be explicit. In .NET, IFeatureManager is an interface you can mock. In your JS/TS codebase, if flags are read directly from process.env, tests that run in a fixed environment will always see the same flag state. Either inject a flag service (mockable) or set environment variables in test setup:

// In vitest.config.ts or test setup
process.env.FEATURE_BETA_CHECKOUT = 'false';

// Or inject a mock service
const mockFlags = { isEnabled: vi.fn().mockResolvedValue(true) };

Always test both flag states — do not let the flag default cause one path to go untested.


Hands-On Exercise

Implement a three-level feature flag system for a NestJS + React application.

Part 1: Environment variable flag

  1. Add FEATURE_NEW_DASHBOARD=true to your .env file.
  2. Create a features.ts config file that reads all FEATURE_* environment variables.
  3. Use the flag in a React component to conditionally render an old and new dashboard component.
  4. Verify the toggle works by changing the env var value.

Part 2: Database flag with caching

  1. Add a feature_flags table to your Prisma schema with name, enabled, and rollout_percentage columns.
  2. Create a FeatureFlagsService with an isEnabled(flagName, userId?) method.
  3. Add a 30-second in-process cache using node-cache.
  4. Create an admin endpoint to update flag state (protected by an admin guard).
  5. Test that changing the flag in the database takes effect within 30 seconds without a redeploy.

Part 3: Guard and decorator

  1. Implement the FeatureFlagGuard and @RequireFlag() decorator from the examples above.
  2. Apply @RequireFlag('new_dashboard_api') to a new API endpoint.
  3. Verify that the endpoint returns 403 when the flag is disabled and 200 when it is enabled.
  4. Write a test for both states by mocking FeatureFlagsService.

Part 4: Progressive rollout

  1. Add the cyrb53 hash function and isInRollout utility to your codebase.
  2. Update FeatureFlagsService.isEnabled to respect the rollout_percentage column.
  3. Set the rollout to 50% and write a test that generates 1,000 different user IDs and verifies that approximately 500 receive true (a sketch follows this list).
  4. Verify the hash is deterministic: the same user ID always produces the same result.
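
A minimal sketch of that distribution test, assuming Vitest and the isInRollout utility from earlier in this article (the import path is illustrative):

// src/feature-flags/rollout.spec.ts
import { describe, expect, it } from 'vitest';
import { isInRollout } from './rollout'; // illustrative path to the utility

describe('isInRollout', () => {
  it('puts roughly half of users in a 50% rollout', () => {
    const userIds = Array.from({ length: 1000 }, (_, i) => `user-${i}`);
    const enabled = userIds.filter((id) => isInRollout(id, 'beta_checkout', 50)).length;

    // Hash-based bucketing is not exact; allow a reasonable margin around 500
    expect(enabled).toBeGreaterThan(400);
    expect(enabled).toBeLessThan(600);
  });

  it('is deterministic for the same user', () => {
    const first = isInRollout('user-42', 'beta_checkout', 50);
    const second = isInRollout('user-42', 'beta_checkout', 50);
    expect(second).toBe(first);
  });
});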

Quick Reference

Decision Tree: Which Flag Approach?

Do you need to toggle without a redeploy?
├── No → Environment variable flag (simplest)
└── Yes
    ├── Do you need per-user/per-tenant targeting?
    │   ├── No → Database flag with cache
    │   └── Yes
    │       ├── Do you need A/B test analytics?
    │       │   ├── No → Database flag with rollout_percentage
    │       │   └── Yes → PostHog or LaunchDarkly
    │       └── Do you need real-time delivery (<1s toggle)?
    │           ├── No → Database flag (30s cache TTL)
    │           └── Yes → Flagsmith or LaunchDarkly

Flag Naming Conventions

# Environment variables: FEATURE_ prefix, SCREAMING_SNAKE_CASE
FEATURE_BETA_CHECKOUT=true
FEATURE_NEW_TAX_ENGINE=false
FEATURE_AI_SUGGESTIONS=true

# Database / service flag names: snake_case
beta_checkout
new_tax_engine
ai_suggestions

# TypeScript config object keys: camelCase
features.betaCheckout
features.newTaxEngine
features.aiSuggestions

NestJS Integration Pattern

// 1. Define the service interface
interface IFeatureFlagsService {
  isEnabled(flagName: string, userId?: string): Promise<boolean>;
}

// 2. Inject where needed
constructor(@Inject(FEATURE_FLAGS_SERVICE) private flags: IFeatureFlagsService) {}

// 3. Guard usage
@UseGuards(FeatureFlagGuard)
@RequireFlag('my_feature')
@Get('my-route')
async myRoute() { ... }

// 4. Service usage
const useNewFlow = await this.flags.isEnabled('new_checkout_flow', user.id);

Flag Cleanup Checklist

When a flag reaches 100% rollout and has been stable for one sprint:

  • Remove all isEnabled() calls for the flag
  • Delete the “off” code path entirely (the old behavior)
  • Remove the flag from feature_flags table (or set to deprecated)
  • Remove any A/B test tracking for this flag
  • Update any documentation that mentions the flag
  • Close the original feature flag ticket

Comparison: Options

| Option | Redeploy to change? | User targeting | Analytics | Cost |
| --- | --- | --- | --- | --- |
| Env var | Yes | No | No | Free |
| DB flag | No (< cache TTL) | Yes (custom) | No | Your DB costs |
| Flagsmith | No | Yes | Basic | Free tier available |
| PostHog flags | No | Yes | Full product analytics | Free tier available |
| LaunchDarkly | No | Yes | Full | $$$ |

Further Reading

Technical Writing and Documentation

For .NET engineers who know: XML doc comments, Sandcastle, Azure DevOps Wiki, Confluence, and the Microsoft documentation style used across MSDN and learn.microsoft.com
You’ll learn: How documentation works in open-source JS/TS projects — README-driven development, TSDoc, ADRs, and the docs-as-code philosophy — and our specific conventions for new projects
Time: 10-15 min read


The .NET Way (What You Already Know)

Microsoft has a mature, consistent documentation culture built around a few tools. XML doc comments in C# (/// <summary>, /// <param>, /// <returns>) are first-class language constructs that IDEs render on hover and that Sandcastle or DocFX compile into browsable HTML documentation. Azure DevOps Wiki provides a centralized wiki with Markdown support, often used for runbooks, onboarding guides, and architectural notes. Confluence is common in larger organizations. Swagger/Swashbuckle generates API reference from controller annotations.

The pattern: documentation lives in dedicated tools or generated from structured comments. It is authoritative when someone goes looking for it, but it is separate from the code and can drift.

/// <summary>
/// Calculates the total price for an order, including applicable taxes
/// and discounts. Returns zero if the order has no line items.
/// </summary>
/// <param name="order">The order to calculate. Must not be null.</param>
/// <param name="applyDiscount">When true, applies the customer's loyalty discount.</param>
/// <returns>The total price in USD as a decimal, rounded to two decimal places.</returns>
/// <exception cref="ArgumentNullException">Thrown when <paramref name="order"/> is null.</exception>
public decimal CalculateTotal(Order order, bool applyDiscount = true)

This is good documentation. The hover tooltip in Visual Studio is genuinely useful. The generated HTML documentation is browsable. The problem is maintenance: when you change the behavior of CalculateTotal, the docs do not automatically fail CI. They silently drift.


The JS/TS Way

Open-source JS/TS projects use a fundamentally different documentation model: docs-as-code. Documentation is:

  • Written in Markdown, stored in the same repository as the code
  • Versioned with Git alongside the code it documents
  • Reviewed in pull requests alongside code changes
  • Checked for broken links and formatting in CI

The philosophy is pragmatic: documentation that is hard to update will not be updated. Keeping docs in the repo, in the same PR as code changes, is the only reliable way to keep them current.

The README as the Front Door

Every repository has a README.md that serves as the primary entry point for anyone who encounters the project. On GitHub, it renders automatically on the repository home page. A README is not optional decoration — it is the first thing a new engineer reads, and its quality signals the maturity of the project.

A complete README includes:

# Project Name

One sentence describing what this project does and who it is for.

## Prerequisites

- Node.js 22+ (use `.nvmrc` or `volta pin`)
- pnpm 9+
- Docker (for local database)

## Getting Started

\`\`\`bash
git clone git@github.com:org/project.git
cd project
pnpm install
cp .env.example .env
# Edit .env with your local values
pnpm db:migrate
pnpm dev
\`\`\`

Open http://localhost:3000.

## Project Structure

\`\`\`
src/
  modules/        # NestJS feature modules
  common/         # Shared utilities and guards
  config/         # Environment configuration
  prisma/         # Database schema and migrations
tests/
  unit/
  e2e/
\`\`\`

## Available Commands

| Command | Description |
|---------|-------------|
| `pnpm dev` | Start development server with hot reload |
| `pnpm build` | Compile for production |
| `pnpm test` | Run unit tests |
| `pnpm test:e2e` | Run end-to-end tests |
| `pnpm db:migrate` | Apply pending migrations |
| `pnpm db:studio` | Open Prisma Studio (visual DB browser) |

## Environment Variables

See `.env.example` for all required variables with descriptions.

## Architecture

Brief description of the key design decisions. Link to ADRs in `/docs/decisions/` for detailed rationale.

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md) for branch naming, PR process, and coding standards.

What a README should not contain:

  • Comprehensive API reference (put this in code comments or a separate docs directory)
  • Architectural rationale for decisions made months ago (put this in ADRs)
  • Exhaustive troubleshooting guides (put this in runbooks)
  • A history of changes (that is what git log is for)

TSDoc: Documenting TypeScript APIs

TSDoc is the TypeScript equivalent of XML doc comments. It uses /** */ block comments with standardized tags. VS Code, WebStorm, and most TypeScript-aware tools render these as hover documentation.

/**
 * Calculates the total price for an order, including applicable taxes
 * and discounts. Returns `0` if the order has no line items.
 *
 * @param order - The order to calculate. Must have at least one line item.
 * @param options - Optional calculation parameters.
 * @param options.applyDiscount - When `true`, applies the customer's loyalty discount.
 * @returns The total price in USD, rounded to two decimal places.
 * @throws {@link OrderValidationError} When the order fails validation.
 *
 * @example
 * ```typescript
 * const total = calculateOrderTotal(order, { applyDiscount: true });
 * console.log(total); // 42.99
 * ```
 */
export function calculateOrderTotal(
  order: Order,
  options: { applyDiscount?: boolean } = {}
): number {
  // ...
}

TSDoc differs from XML docs in important ways:

  • Tags use @param name - description instead of <param name="x">description</param>
  • The @example tag takes a fenced code block, not a <code> element
  • There is no equivalent to <exception> — use @throws with a type reference
  • @internal marks a symbol as not part of the public API (tools can strip it)
  • @deprecated marks a symbol with a reason and replacement

TSDoc is enforced by ESLint with eslint-plugin-tsdoc:

pnpm add -D eslint-plugin-tsdoc
// eslint.config.js
import tsdoc from 'eslint-plugin-tsdoc';

export default [
  {
    plugins: { tsdoc },
    rules: {
      'tsdoc/syntax': 'warn',
    },
  },
];

When to write TSDoc:

  • All exported functions, classes, and interfaces in a library or shared module
  • All public methods on NestJS services that other services call
  • Anything with non-obvious parameters, side effects, or error conditions
  • Public API endpoints (supplement Swagger, do not replace it)

When to skip TSDoc:

  • Private implementation details that are not part of any API surface
  • Simple one-line functions where the name and types are self-documenting
  • Internal helpers used only within the same file

Swagger / OpenAPI for API Reference

NestJS generates Swagger documentation from decorators. This is the authoritative API reference for anyone calling your backend.

// main.ts
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';

const config = new DocumentBuilder()
  .setTitle('Orders API')
  .setDescription('Order management API for the checkout system')
  .setVersion('1.0')
  .addBearerAuth()
  .build();

const document = SwaggerModule.createDocument(app, config);
SwaggerModule.setup('api/docs', app, document);
// In your controller and DTOs
import { ApiOperation, ApiResponse, ApiTags, ApiProperty } from '@nestjs/swagger';

export class CreateOrderDto {
  @ApiProperty({ description: 'Product ID from the catalog', example: 'prod_123' })
  productId: string;

  @ApiProperty({ description: 'Quantity to order', minimum: 1, maximum: 100 })
  quantity: number;
}

@ApiTags('orders')
@Controller('orders')
export class OrdersController {
  @Post()
  @ApiOperation({ summary: 'Place a new order' })
  @ApiResponse({ status: 201, description: 'Order created', type: OrderResponseDto })
  @ApiResponse({ status: 400, description: 'Invalid order data' })
  @ApiResponse({ status: 401, description: 'Not authenticated' })
  async createOrder(@Body() dto: CreateOrderDto) { ... }
}

Swagger UI is available at /api/docs in development and staging. In production, either disable it or protect it behind authentication — it is a complete map of your API surface.
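
A minimal sketch of that guard, assuming NODE_ENV distinguishes environments (alternatively, keep Swagger enabled but wrap the /api/docs route in your auth middleware):

// main.ts: only expose Swagger UI outside production
if (process.env.NODE_ENV !== 'production') {
  const config = new DocumentBuilder()
    .setTitle('Orders API')
    .setVersion('1.0')
    .addBearerAuth()
    .build();
  const document = SwaggerModule.createDocument(app, config);
  SwaggerModule.setup('api/docs', app, document);
}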

Architecture Decision Records (ADRs)

An ADR documents a significant architectural decision: why you made it, what alternatives you considered, and what the trade-offs are. ADRs are the institutional memory of a project — they answer the question “why is it done this way?” without requiring anyone to be in the room when the decision was made.

File location and naming:

docs/
  decisions/
    0001-use-prisma-over-typeorm.md
    0002-use-nestjs-for-api-layer.md
    0003-postgres-as-primary-database.md
    0004-use-pnpm-workspaces-for-monorepo.md

ADR template:

# ADR 0004: Use pnpm Workspaces for Monorepo

**Date:** 2026-01-15
**Status:** Accepted
**Deciders:** Chris Therriault, [team members]

## Context

We are building a project with a NestJS API, a Next.js frontend, and shared
TypeScript types. We need a strategy for managing these three packages in a
single repository with shared dependencies.

## Decision

We will use pnpm workspaces to manage the monorepo. The workspace root will
contain a `pnpm-workspace.yaml` defining the packages. Shared types live in
`packages/types`. Build orchestration uses Turborepo.

## Alternatives Considered

**npm workspaces** — Available since npm 7. Rejected because pnpm's symlink
strategy avoids phantom dependencies and the store reduces disk usage.

**Nx** — More powerful build graph and code generation. Rejected because the
learning curve and configuration complexity exceeds our needs for 3 packages.
Revisit if the monorepo grows beyond 10 packages.

**Separate repositories** — Simpler per-repo tooling. Rejected because shared
type changes would require coordinated cross-repo PRs.

## Consequences

**Positive:**
- Single `pnpm install` installs all package dependencies
- TypeScript path aliases work across packages without publishing to npm
- Shared CI/CD pipeline configuration

**Negative:**
- pnpm-specific workspace syntax (not portable to npm/yarn without changes)
- Turborepo adds configuration complexity

## References

- [pnpm Workspaces documentation](https://pnpm.io/workspaces)
- [Turborepo documentation](https://turbo.build/repo/docs)

ADR rules:

  • Write the ADR at the time of the decision, not retrospectively
  • Status values: Proposed, Accepted, Deprecated, Superseded by ADR-XXXX
  • Supersede, do not delete. A deprecated decision is still history.
  • Keep ADRs short — if you are writing more than 500 words, you are probably documenting implementation details, not a decision

Runbooks

A runbook documents how to respond to a known operational situation: a recurring incident, a deployment procedure, a database maintenance task. Runbooks are for the human on call at 2am who needs to act quickly without reading code.

File location:

docs/
  runbooks/
    restart-background-jobs.md
    rotate-database-credentials.md
    handle-payment-webhook-failures.md
    emergency-rollback.md

Runbook template:

# Runbook: Emergency Rollback on Render

**Trigger:** A deployment causes errors, and the fix cannot be deployed immediately.
**Owner:** On-call engineer
**Time to complete:** ~5 minutes

## Prerequisites

- Access to the Render dashboard
- Access to Sentry to confirm error rates

## Steps

1. Open Render dashboard → [Service Name] → Deploys
2. Find the last known-good deploy (before the incident started)
3. Click "Rollback to this deploy"
4. Wait ~60 seconds for the rollback to complete
5. Verify in Sentry that the error rate drops
6. Post in #incidents Slack channel: "Rolled back [service] to [commit SHA]"
7. Create a post-incident task to fix the root cause before re-deploying

## Verification

After rollback, call `GET /health` and confirm the response is `{"status":"ok"}`.
Check Sentry: the incident's error rate should drop to zero within 2 minutes.

## Notes

Rollback does not revert database migrations. If the deployment included a
migration that is incompatible with the previous code version, a rollback alone
may not be sufficient. Escalate to senior engineer if the health check fails
after rollback.

The docs-as-code Workflow

The practical implication of keeping docs in the repository:

  1. New feature PR includes documentation. The PR that adds a new API endpoint also updates the README if the architecture changes, adds TSDoc to the new service, and adds/updates the relevant runbook if operational behavior changes.

  2. ADRs are reviewed in PRs. An ADR for a significant architectural decision is committed as a PR for team review before it is accepted.

  3. Broken documentation fails CI. Use tools to catch drift:

# .github/workflows/docs.yml
- name: Check for broken markdown links
  uses: gaurav-nelson/github-action-markdown-link-check@v1
  with:
    folder-path: './docs'

  4. Old documentation is updated, not appended. Do not write “UPDATE 2026-02-15: this no longer applies” in the middle of a document. Update the document to reflect current reality. Git history preserves the old version.

Key Differences

| .NET/Azure Approach | JS/TS Approach |
| --- | --- |
| XML doc comments (///) | TSDoc (/** */) with @param, @returns, @throws |
| Sandcastle / DocFX for generated docs | TypeDoc or raw TSDoc rendered in IDE |
| Azure DevOps Wiki | Markdown files in docs/ directory in the repo |
| Confluence for architecture docs | ADRs in docs/decisions/ |
| Swagger generated from Swashbuckle XML | Swagger generated from @nestjs/swagger decorators |
| Documentation in a separate system | Documentation in the repository, reviewed in PRs |
| Wiki page can be updated without a PR | Doc change requires a PR (intentional friction) |
| <exception cref="T"> for error docs | @throws {@link ErrorType} in TSDoc |

Gotchas for .NET Engineers

Gotcha 1: Markdown formatting is more precise than it looks. Two spaces at the end of a line create a line break. One blank line creates a paragraph break. Four spaces at the start of a line create a code block. These rules vary slightly between Markdown parsers (CommonMark, GitHub Flavored Markdown, MDX). If your documentation renders incorrectly on GitHub, the cause is almost always whitespace or indentation in the Markdown source. Use a linter (markdownlint) to catch formatting issues before they reach the PR.

Gotcha 2: TSDoc is not JSDoc. You may encounter JSDoc in older codebases: @param {string} name - description. TypeScript projects should use TSDoc instead: @param name - description. The types come from TypeScript annotations, not JSDoc {type} syntax. Mixing them causes confusion and may produce incorrect hover documentation. ESLint with eslint-plugin-tsdoc enforces correct TSDoc syntax.

Gotcha 3: README rot is the most common documentation failure. A README written during initial setup and never updated is worse than no README — it actively misleads new engineers. Getting Started instructions that no longer work, command names that have changed, prerequisites that are out of date. Designate one person per project to own the README, and add “does the README reflect this change?” to your PR checklist. Better: include a README accuracy check as part of your onboarding checklist (Article 8.8), because a new engineer running the Getting Started steps will immediately identify drift.

Gotcha 4: ADRs are written once, not maintained. An ADR documents a decision at a point in time. You do not update an ADR when circumstances change — you write a new ADR that supersedes it. If you update an old ADR to reflect a new decision, you lose the historical record of what was true when the original decision was made and why it was later changed. When a decision is reversed, write Status: Superseded by ADR-0012 in the old ADR and reference the old one in the new one.

Gotcha 5: Swagger in production exposes your API surface publicly. In .NET, Swagger middleware is commonly disabled in production via if (env.IsDevelopment()). In NestJS projects, it is easy to forget this guard. Your Swagger UI at /api/docs is a complete, interactive documentation of every endpoint, parameter type, and response schema. In production, either disable it entirely or protect it with authentication middleware. This is not about security through obscurity — Swagger is genuinely a productivity tool for frontend engineers. It is about not handing attackers a structured attack surface map.


Hands-On Exercise

Create the complete documentation structure for a new NestJS project.

Part 1: README

  1. Start a new NestJS project or use an existing one.
  2. Write a README.md using the template above. Fill in every section for your actual project — do not use placeholder text.
  3. Verify: a new engineer who has never seen the project can follow the “Getting Started” section and get the application running without additional help.
  4. Add markdownlint to the project (pnpm add -D markdownlint-cli2) and add a lint check to package.json:
    "lint:docs": "markdownlint-cli2 '**/*.md' '#node_modules'"
    

Part 2: TSDoc

  1. Find three functions in the project that are exported and have non-obvious behavior.
  2. Write full TSDoc for each: @param, @returns, @throws (if applicable), and at least one @example.
  3. Install eslint-plugin-tsdoc and configure it. Fix any TSDoc syntax warnings.
  4. Hover over the documented functions in VS Code and verify the hover tooltip renders the documentation correctly.

Part 3: ADR

  1. Write an ADR for one significant decision already made in the project (which ORM you chose, which auth provider, which hosting platform). Use the ADR template above.
  2. Create docs/decisions/0001-your-decision.md.
  3. Have a colleague or AI reviewer check that the ADR answers: what was the decision, what alternatives were considered, and what are the consequences?

Part 4: Runbook

  1. Write a runbook for the most likely operational scenario in your project: “what do you do if the application returns 500 errors and you need to roll back?”
  2. Include specific steps with exact UI element names or CLI commands.
  3. Have someone unfamiliar with the infrastructure follow the runbook in staging and identify any steps that are unclear or missing.

Quick Reference

Documentation Locations

| Document type | Location | Tool |
| --- | --- | --- |
| Project overview and setup | README.md (root) | Markdown |
| API code documentation | Inline TSDoc on exported symbols | TSDoc / eslint-plugin-tsdoc |
| REST API reference | Auto-generated at /api/docs | @nestjs/swagger |
| Architectural decisions | docs/decisions/NNNN-slug.md | Markdown (ADR format) |
| Operational runbooks | docs/runbooks/slug.md | Markdown |
| Component/function examples | docs/examples/ or inline @example | TSDoc / Storybook |
| Changelog | CHANGELOG.md | Conventional Commits + changelogen |
| Contributing guide | CONTRIBUTING.md | Markdown |

TSDoc Cheat Sheet

/**
 * Short summary sentence (appears in hover tooltip).
 *
 * Optional longer description. Can span multiple paragraphs.
 * Use code formatting: `variableName` or code blocks below.
 *
 * @param paramName - Description. Note the dash after the name.
 * @param options - Options object.
 * @param options.fieldName - Individual option field.
 * @returns What the function returns. Describe the shape for complex types.
 * @throws {@link ErrorClassName} When this error is thrown and why.
 * @deprecated Use {@link newFunction} instead. Removed in v3.0.
 * @internal Not part of the public API. Subject to change without notice.
 * @see {@link relatedFunction} for a related operation.
 *
 * @example
 * ```typescript
 * const result = myFunction('input', { option: true });
 * // result: 'expected output'
 * ```
 */

ADR Statuses

| Status | Meaning |
| --- | --- |
| Proposed | Under discussion, not yet committed |
| Accepted | Decision has been made and implemented |
| Deprecated | No longer relevant (technology was removed) |
| Superseded by ADR-XXXX | Replaced by a newer decision |

Our Project Templates

New project documentation checklist:

  • README.md with all sections filled in (Prerequisites, Getting Started, Commands, Environment Variables, Architecture)
  • .env.example with every variable listed and documented inline
  • docs/decisions/0001-*.md for the primary technology choices (why this framework, why this database, why this hosting)
  • docs/runbooks/emergency-rollback.md before the first production deployment
  • TSDoc on all exported service methods and utility functions
  • Swagger configured and accessible at /api/docs in development and staging
  • markdownlint-cli2 in dev dependencies and in CI

PR checklist additions:

  • Does the README reflect any changes to setup, commands, or environment variables?
  • Are new exported functions documented with TSDoc?
  • Does this decision warrant an ADR?
  • Is there a runbook to add or update?

Further Reading

Team Onboarding Checklist

For .NET engineers who know: The full Microsoft stack and are now joining a team using the JS/TS stack covered in this curriculum
You’ll learn: The concrete, ordered steps to go from zero to productive — accounts to create, tools to install, repos to run, and the 30/60/90-day plan to fully ramp
Time: 10-15 min read (then use the checklist as a working document)


The .NET Way (What You Already Know)

Onboarding to a .NET shop typically involves: getting a Windows machine joined to the domain, installing Visual Studio, connecting to Azure DevOps, cloning the solution, configuring connection strings in appsettings.Development.json, and asking someone to explain the solution architecture. The IDE does most of the heavy lifting — it finds the projects, resolves NuGet packages, and gives you a working F5 experience within an hour.

The toolchain is integrated. One installer (Visual Studio) handles the compiler, debugger, NuGet, and project system. Azure DevOps handles source control, CI, and work items. Azure handles hosting. The number of external accounts and CLI tools you need is low.

The JS/TS stack is different. You interact with more services, more CLI tools, and more configuration files. The trade-off is that the tools are lighter, faster, and composable — but the onboarding footprint is larger.

This article is your complete checklist. Print it, save it as a GitHub issue, or open it in a second window and work through it top to bottom.


The Onboarding Model

Onboarding happens in four stages:

  1. Accounts — create accounts and get access to the systems your team uses
  2. Local environment — install the tools needed to run the codebase
  3. First run — clone repos, get the apps running locally, make a change
  4. First contributions — fix a bug, add a feature, deploy to preview

Stages 1 and 2 can be done before your first day if you have access to the account creation steps. Stage 3 is typically day one. Stage 4 starts day two and takes through the end of week one.


Stage 1: Accounts

Create these accounts in order. Some require approval from an existing team member — flag those immediately so approval is not a blocker.

Version Control and Code Quality

  • GitHub account (github.com). Create a personal account if you do not have one. Your employer does not own your GitHub account. Use your work email as a secondary email for commit attribution if required.

    • Ask team lead to add you to the organization and the relevant team.
    • Enable two-factor authentication. This is required for all organizations we work with.
  • SonarCloud (sonarcloud.io). Log in with your GitHub account. SonarCloud uses GitHub OAuth.

    • Ask team lead to add you to the organization.
    • Verify you can see the project dashboards after being added.
  • Snyk (app.snyk.io). Log in with your GitHub account.

    • Ask team lead to add you to the organization.
    • Snyk scans dependencies for known vulnerabilities; you will see its PR checks immediately.
  • Semgrep (semgrep.dev). Log in with GitHub.

    • Ask team lead to add you to the organization.
    • Semgrep performs static analysis and security pattern matching; its findings appear in PRs.

Hosting and Infrastructure

  • Render (render.com). Create an account and log in with GitHub.
    • Ask team lead to add you to the team.
    • Verify you can see the services dashboard after being added.
    • Render is our hosting platform. You will deploy here and read logs here.

Observability

  • Sentry (sentry.io). Create an account.
    • Ask team lead to add you to the organization.
    • Verify you can see the project issues after being added.
    • Sentry is where you go when something breaks in production.

Authentication (Application)

  • Clerk (clerk.com). Create an account.
    • Ask team lead to add you to the organization.
    • Clerk manages user authentication for the applications you will build. You need dashboard access to configure auth settings and view user data in development.

AI Coding Assistant

  • Claude Code — Install via npm (covered in Stage 2). You need an Anthropic account to use Claude Code.
    • Go to console.anthropic.com and create an account.
    • Get an API key from the API Keys section. Store it in your password manager; you will not see it again.
    • Alternatively, ask the team lead if there is a shared workspace API key for the team.

Stage 2: Local Environment

Work through this section in order. Each tool is a prerequisite for the next.

Terminal

On macOS, use Terminal.app or install iTerm2. On Linux, use your distribution’s default terminal. On Windows, use WSL2 with Ubuntu — the JS/TS toolchain is designed for Unix. Running natively on Windows without WSL2 works but produces inconsistent behavior with file watchers, line endings, and shell scripts.

# Verify your shell
echo $SHELL
# Should be /bin/zsh (macOS) or /bin/bash (Linux)

Git

Git is almost certainly already installed. Verify and configure it.

# Verify
git --version
# Should be 2.40+

# Configure your identity (use the same email as your GitHub account)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Set default branch name to main
git config --global init.defaultBranch main

# Set VS Code as the default git editor
git config --global core.editor "code --wait"

Configure SSH for GitHub. This avoids HTTPS credential prompts.

# Generate a key (if you do not have one)
ssh-keygen -t ed25519 -C "you@example.com"

# Add to SSH agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Copy the public key
cat ~/.ssh/id_ed25519.pub

Add the public key to your GitHub account: Settings → SSH and GPG keys → New SSH key.

# Verify it works
ssh -T git@github.com
# Should respond: Hi username! You've successfully authenticated...

Node.js

We use fnm (Fast Node Manager) to manage Node.js versions. Do not install Node.js directly from nodejs.org — it installs a system-wide version that cannot be swapped per project.

# Install fnm
curl -fsSL https://fnm.vercel.app/install | bash

# Reload your shell profile
source ~/.zshrc  # or ~/.bashrc

# Install the Node.js version we use (check .nvmrc in the repo, or use LTS)
fnm install 22
fnm use 22
fnm default 22

# Verify
node --version  # v22.x.x
npm --version   # 10.x.x

pnpm

pnpm is our package manager. Do not use npm or yarn on team projects — they produce different lockfiles and will cause merge conflicts.

# Install pnpm via the standalone installer (recommended)
curl -fsSL https://get.pnpm.io/install.sh | sh -

# Or via npm
npm install -g pnpm

# Verify
pnpm --version  # 9.x.x

Docker

Docker runs our local PostgreSQL database and other services. Install Docker Desktop from docker.com/products/docker-desktop.

# Verify
docker --version       # Docker 27.x.x or similar
docker compose version # Docker Compose version v2.x.x

VS Code

Download VS Code from code.visualstudio.com. After installing, open VS Code and install the required extensions.

Required extensions:

# Install from the command line (run after VS Code is installed and `code` is in PATH)
code --install-extension dbaeumer.vscode-eslint
code --install-extension esbenp.prettier-vscode
code --install-extension bradlc.vscode-tailwindcss
code --install-extension Prisma.prisma
code --install-extension ms-vscode.vscode-typescript-next
code --install-extension Vue.volar
code --install-extension ms-playwright.playwright
code --install-extension github.copilot  # if team uses Copilot alongside Claude

Recommended VS Code settings — add these to your settings.json (Cmd+Shift+P → “Preferences: Open User Settings (JSON)”):

{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": "explicit"
  },
  "typescript.preferences.importModuleSpecifier": "relative",
  "typescript.updateImportsOnFileMove.enabled": "always",
  "eslint.validate": ["javascript", "typescript", "typescriptreact", "vue"],
  "files.insertFinalNewline": true,
  "files.trimTrailingWhitespace": true
}

GitHub CLI

The GitHub CLI (gh) is used extensively for PRs, issues, and CI status. We work GitHub-first; most workflows happen via gh rather than the web interface.

# Install on macOS
brew install gh

# Install on Linux
# Follow instructions at: https://github.com/cli/cli/blob/trunk/docs/install_linux.md

# Authenticate
gh auth login
# Choose: GitHub.com → HTTPS or SSH → Log in with a web browser
# Follow the prompts

# Verify
gh auth status

Claude Code

# Install Claude Code globally
npm install -g @anthropic-ai/claude-code

# Run the setup (it will prompt for your API key)
claude

# Or set the API key directly
export ANTHROPIC_API_KEY=sk-ant-...
# Add this line to your ~/.zshrc or ~/.bashrc to persist it

Verify Claude Code works:

# In any project directory
claude --help

Stage 3: Repos and First Run

Clone the Repositories

Ask your team lead which repositories you need. A typical setup:

# Create a development directory
mkdir ~/dev && cd ~/dev

# Clone the main application repo (example — use actual repo URL)
git clone git@github.com:your-org/main-app.git
cd main-app

# Install dependencies
pnpm install

# Copy the example environment file and fill in your values
cp .env.example .env

The .env file contains local secrets and configuration. Your team lead will share the development values. Common required variables:

# .env (never commit this file)
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/appdb
CLERK_SECRET_KEY=sk_test_...
CLERK_PUBLISHABLE_KEY=pk_test_...
SENTRY_DSN=https://...@sentry.io/...

Start the Local Database

# Start PostgreSQL via Docker Compose
docker compose up -d postgres

# Run database migrations
pnpm db:migrate

# Optional: seed with development data
pnpm db:seed

# Open Prisma Studio to browse the database
pnpm db:studio

Prisma Studio opens at http://localhost:5555. It is the equivalent of SQL Server Management Studio for your local database.

Start the Application

# Start the development server(s)
pnpm dev

In a monorepo, pnpm dev at the root typically starts all services via Turborepo. In a single-package project, it starts one process.
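
For orientation, a minimal version of that wiring could look like the sketch below; the package globs and the Turborepo 2.x tasks key are assumptions, so check the repo's actual files:

# pnpm-workspace.yaml
packages:
  - 'apps/*'
  - 'packages/*'

// turbo.json: `pnpm dev` at the root (scripted as "turbo run dev") runs the dev task in every package
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "dev": { "cache": false, "persistent": true }
  }
}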

Expected result: the application is available at http://localhost:3000 (or the port specified in .env), hot reload is working (change a file and the browser updates), and no errors appear in the terminal.

If something does not work:

  1. Check the README — the Getting Started section should cover common issues
  2. Check that all required environment variables are set in .env
  3. Check that Docker is running and the database container is healthy: docker compose ps
  4. Ask in the team Slack channel — do not spend more than 30 minutes blocked on setup

Run the Tests

# Run unit tests
pnpm test

# Run tests in watch mode (while developing)
pnpm test --watch

# Run end-to-end tests
pnpm test:e2e

All tests should pass on a clean checkout. If they do not, that is a bug in the project, not a problem with your setup. Report it.

Make Your First Change

Before writing any real code, make a trivial change to verify the full cycle works:

  1. Create a new branch: git checkout -b chore/onboarding-hello-world
  2. Add a comment to any file: // onboarding check - remove me
  3. Run the linter: pnpm lint
  4. Run the tests: pnpm test
  5. Stage and commit: git add -A && git commit -m "chore: onboarding test commit"
  6. Push: git push -u origin chore/onboarding-hello-world
  7. Open a draft PR: gh pr create --draft --title "chore: onboarding test" --body "Testing my setup"
  8. Observe the CI checks run in the PR
  9. Close the PR without merging: gh pr close

If CI passes, your environment is correctly configured end-to-end.


Stage 4: First Contributions

Finding Your First Task

Ask your team lead for a “good first issue” — a low-stakes bug fix or small improvement that:

  • Does not require deep domain knowledge
  • Has a clear acceptance criterion
  • Touches a real part of the codebase (not a toy example)
  • Will be reviewed promptly

In GitHub Issues, look for the good-first-issue label.

The Task Workflow

# 1. Assign the issue to yourself
gh issue edit ISSUE_NUMBER --add-assignee @me

# 2. Create a branch from main
git checkout main && git pull
git checkout -b fix/brief-description-of-issue

# 3. Write the fix
# ... edit files ...

# 4. Run lint and tests
pnpm lint && pnpm test

# 5. Commit
git add specific-file.ts another-file.ts
git commit -m "fix: brief description of what was fixed

Fixes #ISSUE_NUMBER"

# 6. Push and open a PR
git push -u origin fix/brief-description-of-issue
gh pr create --title "fix: brief description" --body "
## Summary
- What changed and why

## Testing
- How you verified the fix

Fixes #ISSUE_NUMBER
"

Your First Deployment

Render deploys automatically from the main branch. To deploy your change:

  1. Get your PR approved and merged to main
  2. Render picks up the push automatically and starts a new deployment
  3. Watch the deployment in the Render dashboard
  4. Check the Render logs to confirm startup was clean
  5. Verify your change is visible in the production or staging environment

If Render has preview environments configured (common for frontend changes), your PR will have a preview URL — a live deployment of your branch. Share it in the PR for easier review.


People to Meet

In your first week, schedule 30-minute 1:1s with:

  • Team lead — project context, priorities, communication norms
  • The engineer who last touched the main service — architecture walkthrough, known issues, ongoing work
  • The engineer responsible for infrastructure/DevOps — how deployments work, how to read logs, who to call when production breaks
  • A frontend engineer (if you are primarily backend) — how the client consumes the API, what the pain points are
  • A backend engineer (if you are primarily frontend) — how the API is structured, where the business logic lives

These are not status meetings. Come with specific questions about the codebase. “Walk me through how a new user signs up, end-to-end” is a better question than “can you explain the architecture?”


30/60/90-Day Plan

Days 1–30: Orient

  • Complete all four stages of this checklist
  • Fix at least two issues from the backlog
  • Review at least five PRs from teammates
  • Read all existing ADRs in docs/decisions/
  • Run through the full test suite locally and in CI
  • Deploy at least one change to production
  • Complete any outstanding articles from this curriculum that are relevant to your current work

Success criteria: You can open the codebase, find where a given feature is implemented, make a change, test it, and get it deployed. You do not need to ask for help with any of those steps.

Days 31–60: Contribute

  • Own at least one feature from issue creation to production deployment
  • Write an ADR for any significant technical decision you make or are involved in
  • Contribute to or update at least one runbook
  • Identify and fix one test coverage gap
  • Propose one improvement to the team’s process or tooling (PR, ADR, or discussion)

Success criteria: You are shipping features independently. You know which architectural decisions are settled and which are open to discussion. You have opinions about the codebase that are informed by the code, not just the curriculum.

Days 61–90: Lead

  • Be the primary reviewer on at least three PRs
  • Lead a small technical initiative (refactor, new tooling, infrastructure improvement)
  • Conduct a knowledge transfer session on something you’ve learned
  • Update this onboarding checklist with anything that was missing or wrong
  • Mentor the next engineer who joins after you

Success criteria: You are a full contributor. You are adding to the team’s institutional knowledge, not just drawing on it.


Gotchas for .NET Engineers

Gotcha 1: Dependencies are not restored automatically; you run pnpm install yourself whenever package.json changes. In .NET, dotnet build restores NuGet packages automatically. In Node.js, pnpm dev and pnpm test do not install packages if package.json has changed. If you pull new commits and see Cannot find module '...' errors, run pnpm install first. Make it a habit: git pull && pnpm install && pnpm dev.

Gotcha 2: The dev server reads its configuration from .env, not from your shell. In .NET, environment variables set in launchSettings.json are scoped to the debug session. In Node.js projects, configuration comes from the .env file (loaded by dotenv or the framework's config module). Exporting a variable in your terminal (export FOO=bar) does not affect a dev server that is already running; update .env and restart the dev server instead.
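A common pattern makes this explicit: load and validate the environment once at startup. A minimal sketch, assuming dotenv and zod are in the project (the file name env.ts and the variable list are illustrative):

// env.ts (illustrative)
import 'dotenv/config';            // reads .env into process.env when the process starts
import { z } from 'zod';

const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  CLERK_SECRET_KEY: z.string(),
  SENTRY_DSN: z.string().optional(),
});

// Parsed exactly once. Editing .env or exporting variables in your shell
// has no effect until the dev server is restarted.
export const env = EnvSchema.parse(process.env);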

Gotcha 3: The lockfile is a team contract; do not upgrade dependencies unilaterally. pnpm-lock.yaml is committed to the repository and defines exact versions for the entire team. If you run pnpm add or pnpm update without discussing it, you change the lockfile for everyone. Dependency updates should go through a PR with CI verification, not be done incidentally while working on a feature. Never run pnpm install --no-frozen-lockfile on a team project without understanding why the lockfile is out of sync.

Gotcha 4: VS Code extensions must be installed per machine, not per repository. In .NET, ReSharper or StyleCop settings travel with the solution or .editorconfig. VS Code extensions are user-level, not workspace-level. If you do not have the ESLint extension installed, linting errors will not appear in the editor — but they will still fail CI. Install all required extensions on day one. If the project has a .vscode/extensions.json file, VS Code will prompt you to install recommended extensions automatically.

Gotcha 5: Node.js processes do not restart automatically on .env changes. IIS recycles the app pool when web.config changes, and .NET apps with Azure App Configuration can reload settings at runtime, so configuration changes are picked up for you. Node.js reads process.env once at startup. If you change .env, restart the dev server. This is obvious but easy to forget when you have had the server running for hours.


Hands-On Exercise

This article is itself the exercise. Work through the checklist from top to bottom and check off each item as you complete it.

When you reach the end, do one additional task: identify something that was confusing, missing, or incorrect in this checklist, open a GitHub issue describing it, and fix it in a PR. This is how the onboarding experience improves — every person who goes through it makes it slightly better for the next person.


Quick Reference

Daily Commands

# Start your day
git pull && pnpm install && pnpm dev

# Before committing
pnpm lint && pnpm test

# Create a PR
gh pr create --title "..." --body "..."

# Check CI status
gh pr checks

# View production logs
render logs --service SERVICE_ID --tail

# Check Sentry for recent errors
# (open sentry.io in browser — no CLI)

Branch Naming Conventions

feat/short-description       # new feature
fix/short-description        # bug fix
chore/short-description      # maintenance, dependency updates
docs/short-description       # documentation changes
refactor/short-description   # code changes with no behavior change

First-Week Checklist Summary

| Stage | Item | Done? |
| --- | --- | --- |
| Accounts | GitHub (with 2FA) | [ ] |
| Accounts | SonarCloud | [ ] |
| Accounts | Snyk | [ ] |
| Accounts | Semgrep | [ ] |
| Accounts | Render | [ ] |
| Accounts | Sentry | [ ] |
| Accounts | Clerk | [ ] |
| Accounts | Anthropic (Claude Code) | [ ] |
| Environment | Git configured | [ ] |
| Environment | fnm + Node.js 22 | [ ] |
| Environment | pnpm 9 | [ ] |
| Environment | Docker Desktop | [ ] |
| Environment | VS Code + extensions | [ ] |
| Environment | GitHub CLI | [ ] |
| Environment | Claude Code | [ ] |
| First run | Repos cloned | [ ] |
| First run | .env configured | [ ] |
| First run | Database running and migrated | [ ] |
| First run | Application starts | [ ] |
| First run | All tests pass | [ ] |
| First run | Test PR cycle completed | [ ] |
| First tasks | First issue assigned | [ ] |
| First tasks | First PR opened | [ ] |
| First tasks | First change deployed | [ ] |

GitHub Issue Template

Copy this template when creating the onboarding issue for a new engineer. Assign it to them on their first day.

---
name: Engineer Onboarding
about: Onboarding checklist for new engineers joining the team
title: 'Onboarding: [Engineer Name]'
labels: onboarding
assignees: ''
---

## Welcome

This issue tracks your onboarding progress. Work through each section in order.
Check off items as you complete them. If anything is unclear or wrong, leave a
comment — and fix it in a PR when you have the bandwidth.

## Stage 1: Accounts

- [ ] GitHub account created and 2FA enabled
- [ ] Added to GitHub organization by team lead
- [ ] SonarCloud account created and added to organization
- [ ] Snyk account created and added to organization
- [ ] Semgrep account created and added to organization
- [ ] Render account created and added to team
- [ ] Sentry account created and added to organization
- [ ] Clerk dashboard access granted
- [ ] Anthropic account created and API key saved to password manager

## Stage 2: Local Environment

- [ ] Git configured with name, email, and SSH key for GitHub
- [ ] fnm installed and Node.js 22 active
- [ ] pnpm installed (v9+)
- [ ] Docker Desktop installed and running
- [ ] VS Code installed with all required extensions
- [ ] GitHub CLI installed and authenticated
- [ ] Claude Code installed and authenticated

## Stage 3: First Run

- [ ] Main application repo cloned
- [ ] `pnpm install` succeeded
- [ ] `.env` file configured with development values (get from team lead)
- [ ] Database container running (`docker compose up -d postgres`)
- [ ] Migrations applied (`pnpm db:migrate`)
- [ ] Application runs locally (`pnpm dev`)
- [ ] All tests pass (`pnpm test`)
- [ ] Test PR cycle completed (create draft PR, observe CI, close PR)

## Stage 4: First Contributions

- [ ] First issue assigned and in progress
- [ ] First PR opened and reviewed
- [ ] First change merged and deployed
- [ ] 1:1 completed with team lead
- [ ] 1:1 completed with infrastructure/DevOps engineer
- [ ] 1:1 completed with at least one other team member

## 30-Day Goal

- [ ] Two issues fixed independently
- [ ] Five PRs reviewed
- [ ] ADRs and runbooks read
- [ ] One improvement to this onboarding checklist submitted as a PR

---

**Questions?** Leave a comment here or message the team in Slack.
**Blocked?** Tag the team lead in a comment — do not stay blocked for more than 30 minutes on setup issues.

Further Reading

The following articles in this curriculum are the highest-priority reading for your first week:

  • Article 1.1 — The Landscape: the full ecosystem map
  • Article 1.2 — Runtime fundamentals: event loop vs. CLR
  • Article 2.1 — TypeScript type system compared to C#
  • Article 4.1 — NestJS architecture: the ASP.NET Core Rosetta Stone
  • Article 8.5 — Debugging production issues: the Node.js playbook (what to do when something breaks)


Appendix A — Glossary: .NET Terms to JS/TS Terms

This table maps common .NET concepts, patterns, and tooling to their nearest equivalents in the modern JavaScript/TypeScript ecosystem. Entries are alphabetized by .NET term. Where no direct equivalent exists, the closest analog is listed with a clarifying note.


Full Mapping Table

.NET TermJS/TS EquivalentNotes
abstract classabstract class (TypeScript)TypeScript supports abstract natively. JS has no runtime enforcement — TS provides compile-time only.
Action / Func delegateFunction type literal / arrow function(x: number) => string is the TS equivalent of Func<int, string>.
ActionResult / IActionResultResponse (Web API) / return value + status code (NestJS)In NestJS, return the object and decorate with @HttpCode(). In Next.js, use NextResponse.
ADO.NETpg / mysql2 / better-sqlite3Low-level database driver packages. Equivalent raw SQL access layer.
API ControllerNestJS @Controller / Next.js Route HandlerNestJS uses class-based controllers. Next.js uses file-based route.ts exports.
appsettings.json.env / environment variablesdotenv loads .env files. Use zod or t3-env to validate and type env vars.
ASP.NET CoreExpress / Fastify / NestJS / Next.jsExpress is the closest analog. NestJS is the opinionated full-framework equivalent.
Assemblynpm package / ES moduleThe unit of distribution. Published via npm registry.
async / awaitasync / awaitIdentical syntax. JS async is single-threaded; no thread pool parallelism. Use Promise.all() for concurrency.
Attribute (C#)Decorator (@)TypeScript decorators are Stage 3. NestJS relies on them heavily. Behavior differs from C# attributes.
AutoMapperzod.transform() / plain mapping functions / ts-beltNo dominant library equivalent. Manual mapping functions or Zod schema transforms are idiomatic.
Background service / IHostedServiceWorker threads / BullMQ / setIntervalBullMQ (Redis-backed queues) is the production equivalent for background jobs.
BCrypt.Netbcrypt / argon2 npm packagesbcryptjs is pure JS. argon2 is preferred for new systems.
BlazorReact / Vue / SvelteBlazor (WASM) has no direct JS equivalent — it replaces JS with C#. React is the most comparable component model.
Build configuration (Debug/Release)NODE_ENV (development/production)Set via environment variable. Bundlers (Vite, webpack) tree-shake based on this value.
Builder patternMethod chaining / fluent APIsCommon in JS. Example: query.where(...).orderBy(...).limit(...) in Drizzle ORM.
Cancellation tokenAbortController / AbortSignalAbortController.signal is passed to fetch() and async operations for cancellation.
CQRS patternCQRS libraries / manual command/query separationNo dominant library. Nest has community CQRS module (@nestjs/cqrs). Often implemented manually.
Class library projectnpm package (local or published)A workspace package in a pnpm monorepo is the direct analog of a referenced class library.
ClaimsPrincipal / ClaimsJWT payload / session objectClaims live in the JWT payload or session store. Clerk, Auth.js expose typed user objects.
Code-first migrationsPrisma Migrate / Drizzle Kit push/generateBoth tools generate SQL migration files from the schema definition.
ConfigureServices (Startup)Module providers (NestJS) / middleware config (Express)NestJS @Module({ providers: [] }) is the DI registration point.
Connection stringDATABASE_URL environment variableConvention in the JS ecosystem. Prisma, Drizzle, and most ORMs read from DATABASE_URL.
Console applicationNode.js CLI script / tsx entrypointRun with node dist/index.js or tsx src/index.ts for ts-node style execution.
Controller (MVC)NestJS @Controller / Next.js Route Handler / Express routerDepends on framework. NestJS is the most direct analog for class-based MVC.
CORS middlewarecors npm package / built-in framework optionapp.use(cors({ origin: '...' })) in Express. NestJS has app.enableCors().
DateTime / DateTimeOffsetDate / Temporal (Stage 3) / dayjs / date-fnsJS Date is mutable and has quirks. date-fns or dayjs are standard libraries. Temporal is the modern replacement.
DbContext (EF Core)Prisma Client / Drizzle ORM instanceThe singleton database access object. Prisma Client is the direct analog.
DbSet<T>Prisma model accessor (prisma.user) / Drizzle table objectprisma.user.findMany() maps to dbContext.Users.ToList().
Decorator patternHigher-order functions / HOCs / class decoratorsHOFs are idiomatic in JS. Class-based decorators exist in TS/NestJS.
Dependency Injection (DI)NestJS DI container / manual constructor injectionNestJS has a full DI system. In plain Node/React, use module imports or React Context.
Dictionary<K,V>Map<K, V> / plain object Record<K, V>Map is the closest structural equivalent. Plain objects work for string keys.
dotnet CLInpm / pnpm / nest CLI / gh CLISee Appendix D for full command mapping.
dotnet ef (EF Core tools)prisma CLI / drizzle-kit CLIprisma migrate dev, drizzle-kit generate are the equivalent migration commands.
dotnet publishnpm run build / tsc / Vite buildProduces optimized output. For servers: tsc --outDir dist. For frontends: vite build.
dotnet restorepnpm installRestores/installs all declared dependencies.
dotnet runpnpm dev / tsx src/index.tsStarts the dev server with hot reload. tsx --watch for Node scripts.
dotnet testpnpm test / vitest run / jestRuns the test suite. Vitest is the modern standard.
dotnet watchnodemon / tsx --watch / vite dev serverFile watchers that restart on change. Vite’s HMR is near-instant.
DTOs (Data Transfer Objects)Zod schemas / TypeScript interfacesZod schemas validate at runtime and infer TS types. Interfaces are compile-time only.
Entity Framework CorePrisma / Drizzle ORM / TypeORMPrisma is the most popular. Drizzle is the lightweight, type-safe SQL alternative.
Enumconst enum / string union / enum (TS)Prefer string unions ('admin' | 'user') over the TS enum keyword; enums emit runtime code and behave unlike C# enums, and const enum has compiler-specific edge cases.
Environment (Development/Production)NODE_ENVSet at process start. process.env.NODE_ENV === 'production'.
Event sourcingEventStoreDB / custom event log in PostgreSQLNo dominant npm library. Community patterns exist using Postgres or Redis Streams.
ExceptionError / custom error subclassthrow new Error('message'). Custom errors: class NotFoundError extends Error {}.
Exception filterExpress error middleware / NestJS exception filterExpress: app.use((err, req, res, next) => {}). NestJS: @Catch() decorator.
Extension methodModule-level function / prototype extension (avoid)Idiomatic TS uses standalone utility functions. Prototype extension is discouraged.
FluentValidationZod / Yup / ValibotZod is dominant. Provides schema definition, parsing, and typed error messages.
Garbage CollectorV8 Garbage CollectorAutomatic. No manual control. Avoid memory leaks in closures and event listeners.
Generic type TGeneric type parameter <T>Identical syntax. TS generics are compile-time only; erased at runtime.
Guid / Guid.NewGuid()crypto.randomUUID() / uuid npm packagecrypto.randomUUID() is built into Node 19+. uuid package for older environments.
HttpClientfetch / axios / kyfetch is built into Node 18+. axios remains popular. ky is a modern wrapper.
HttpContextRequest / Response objectsExpress: req, res. Next.js: NextRequest, NextResponse. NestJS: injected via @Req().
IConfigurationprocess.env / t3-env / dotenvEnvironment variable access. t3-env adds Zod validation and TypeScript types.
IEnumerable<T>Iterable<T> / Array<T> / generator functionArrays are most common. Generators (function*) produce lazy iterables.
IHostBuilderNestJS NestFactory.create() / Express app initThe application bootstrap entry point.
ILogger / loggingpino / winston / consolapino is the highest-performance structured logger. consola for CLIs.
IMiddleware / middleware pipelineExpress middleware chain / NestJS middlewareapp.use(fn) in Express. NestJS has middleware, guards, interceptors, and pipes.
IOptions<T>Typed env config via t3-env or ZodNo built-in options pattern. t3-env provides the closest typed config experience.
IQueryable<T>Prisma query builder / Drizzle query builderLazy, chainable query builders. Not executed until awaited.
IServiceCollectionNestJS @Module({ providers })DI registration. NestJS modules declare what is injectable.
IServiceScopeNestJS request-scoped providers@Injectable({ scope: Scope.REQUEST }) creates a per-request provider instance.
Integration testSupertest + Vitest / Playwright API testssupertest sends HTTP requests to an Express/NestJS app in tests.
InterfaceTypeScript interface / type aliasTS interfaces are structural (duck typing). No runtime existence — compile-time only.
JWT (System.IdentityModel)jose / jsonwebtoken npm packagejose is the modern, Web Crypto-based library. jsonwebtoken is legacy but widely used.
LINQArray methods + Lodash / native array methodsmap, filter, reduce, find, some, every, flatMap. Lodash adds groupBy, orderBy, etc.
LINQ GroupByArray.prototype.reduce / lodash.groupBy / MapNo single built-in. Object.groupBy() landed in Node 21+ / ES2024.
LINQ SelectArray.prototype.map()Direct equivalent.
LINQ WhereArray.prototype.filter()Direct equivalent.
LINQ FirstOrDefaultArray.prototype.find()Returns undefined instead of null/default.
LINQ OrderByArray.prototype.sort() / lodash.orderByNative sort mutates in place. Use [...arr].sort() to avoid mutation.
List<T>Array<T>JS arrays are dynamic by default. No fixed-size array type.
Mediator pattern@nestjs/cqrs / custom event emitterEventEmitter in Node for basic pub/sub. NestJS CQRS module for formal mediator.
Memory cachelru-cache npm package / RedisIn-process cache: lru-cache. Distributed cache: Redis (via ioredis).
MiddlewareExpress middleware / NestJS middleware, guards, interceptorsExpress middleware is a function (req, res, next) => void.
Migration (EF Core)prisma migrate dev / drizzle-kit generateBoth generate SQL files tracked in source control.
Model bindingRequest body parsing / Zod parsingexpress.json() middleware parses JSON body. Zod validates and types the result.
Model validationZod / class-validator (NestJS)NestJS uses ValidationPipe + class-validator decorators for DTO validation.
MSTest / xUnit / NUnitVitest / JestVitest is the modern standard. API is compatible with Jest.
NamespaceES module (import/export)No direct equivalent. Modules replace namespaces. TypeScript namespace exists but is discouraged.
NuGetnpm registryPackage registry. pnpm add <package> installs from npm.
NuGet package referencepackage.json dependenciesDeclared in package.json. pnpm install resolves and installs.
Object initializer new Foo { Bar = 1 }Object literal { bar: 1 } / spreadTS uses plain object literals. Constructor call then property assignment is uncommon.
ORMPrisma / Drizzle / TypeORM / MikroORMPrisma and Drizzle are the modern choices. TypeORM is the legacy ActiveRecord-style option.
Pagination (manual)Cursor-based or offset pagination via ORMPrisma: take/skip for offset. cursor for keyset pagination.
Pattern matching (switch)switch / discriminated unions / match (ts-pattern)ts-pattern library provides exhaustive pattern matching similar to C# switch expressions.
Polly (resilience)axios-retry / p-retry / cockatielcockatiel is the most feature-complete resilience library (circuit breaker, retry, timeout).
readonlyreadonly (TS) / as constTS readonly on properties. as const freezes object types to literals.
Record<string, T>Record<string, T>Identical. TS has the same Record utility type.
ReflectionNo direct equivalentJS has no runtime type metadata by default. reflect-metadata polyfill used by NestJS/TypeORM.
Repository patternService class / Prisma repository patternOften implemented as a service wrapping Prisma calls. No enforced interface.
Response cachingHTTP cache headers / Cache-Control / RedisSet Cache-Control headers. Next.js has built-in fetch caching. Redis for server-side response cache.
RouteAttribute / routingExpress app.get('/path', handler) / Next.js file-based routingNestJS: @Get(':id'). Next.js: file system routing (app/users/[id]/route.ts).
Scaffoldnest generate / shadcn/ui CLI / create-next-appCode generators. nest g resource scaffolds a full CRUD module.
Sealed classfinal class (TS 5.x not yet) / discriminated unionNo native sealed in TS. Discriminated unions prevent unintended extension at the type level.
Secret Manager.env files / cloud secret stores (Doppler, Vault)Never commit .env with secrets. Use GitHub Actions secrets or Doppler for CI/CD.
Serilog / structured loggingpino / winstonpino outputs JSON structured logs by default. Use pino-pretty for dev formatting.
SignalRSocket.IO / WebSockets (ws) / Server-Sent EventsSocket.IO is the closest analog. Native WebSocket API available in Node 22+.
Singleton lifetimeModule-level singleton / NestJS default scopeIn NestJS, providers are singleton by default. In Node, module exports are cached singletons.
Solution file (.sln)pnpm workspace (pnpm-workspace.yaml)Groups multiple projects. pnpm workspaces replace the solution file concept.
static classPlain module (.ts file)A TS module with exported functions is the idiomatic static class replacement.
String interpolation $"..."Template literal `${value}`Identical concept, different syntax.
Swagger / OpenAPI@nestjs/swagger / zod-openapi / swagger-ui-expressNestJS has first-class Swagger support. Zod schemas can generate OpenAPI specs.
Task<T>Promise<T>Direct equivalent. async functions return Promise. No ValueTask equivalent.
Thread / ThreadPoolWorker threads (worker_threads module)Node is single-threaded by default. CPU-bound work offloaded to Worker threads.
try/catch/finallytry/catch/finallyIdentical syntax. Async errors require await inside try blocks.
Transient lifetimeScope.TRANSIENT (NestJS)@Injectable({ scope: Scope.TRANSIENT }) creates a new instance per injection.
Tuple (int, string)Tuple [number, string] (TS)TS tuples are typed array literals. Used heavily in React hooks (useState returns a tuple).
Type inferenceType inferenceTS infers types from assignments and return values. Explicit annotation often unnecessary.
Unit testVitest test / Jest testdescribe / it / expect API. Vitest is Vite-native and faster than Jest.
using statement (IDisposable)try/finally / using declaration (TC39 Stage 3)JS Stage 3 using keyword mirrors C#. Until stable, use try/finally or explicit .close().
var / implicit typingconst / let (TS infers)Prefer const. TS infers type from initializer. var exists in JS but is avoided.
ValueObject patternBranded types / z.brand() in ZodZod .brand() creates nominal types. Branded types add type-level identity without runtime cost.
Vertical slice architectureFeature-folder structure in NestJS / Next.jsOrganizing code by feature (/users/users.controller.ts, /users/users.service.ts) rather than by layer.
View / Razor PageReact component / Vue SFC / Next.js page.tsx / .vue files are the view layer. Server components in Next.js 13+ blend view and data.
ViewBag / ViewDataProps / component state / query paramsData passed to components via props in React/Vue. No global ViewBag concept.
Web API projectNestJS application / Next.js API routesNestJS is the full-framework equivalent. Next.js app/api/ for lighter API routes.
xUnit [Fact] / [Theory]Vitest it() / it.each()it.each([...]) is the parameterized test equivalent of [Theory] + [InlineData].

Quick-Reference: Lifetime Scope Equivalents

| .NET DI Lifetime | NestJS Equivalent | Notes |
| --- | --- | --- |
| Singleton | Default (no scope specified) | Module-level singleton, shared across all requests. |
| Scoped | Scope.REQUEST | New instance per HTTP request. |
| Transient | Scope.TRANSIENT | New instance per injection point. |

Quick-Reference: Data Access Layer Equivalents

| .NET / EF Core | Prisma | Drizzle ORM |
| --- | --- | --- |
| DbContext | PrismaClient instance | drizzle(connection) instance |
| DbSet<User> | prisma.user | db.select().from(users) |
| SaveChangesAsync() | Implicit per operation | Implicit per query |
| Migrations folder | prisma/migrations/ | drizzle/migrations/ |
| Include() (eager load) | include: { posts: true } | .leftJoin(posts, ...) |
| FromSqlRaw() | prisma.$queryRaw | sql`SELECT …` template tag |

Last updated: 2026-02-18

Appendix B — Recommended Reading & Resources

Curated references for the modern JS/TS stack. Each category lists a maximum of five resources — only canonical, maintained, and authoritative sources. Difficulty is rated: Beginner, Intermediate, Advanced.


TypeScript

TitleURLDescriptionDifficulty
TypeScript Handbookhttps://www.typescriptlang.org/docs/handbook/intro.htmlThe official, comprehensive language reference maintained by the TypeScript team.Beginner–Intermediate
TypeScript Deep Divehttps://basarat.gitbook.io/typescript/Basarat Ali Syed’s free book covering TS internals, patterns, and gotchas for experienced developers.Intermediate
TypeScript Release Noteshttps://www.typescriptlang.org/docs/handbook/release-notes/overview.htmlOfficial per-version changelogs; essential for tracking new features and breaking changes.All levels
Type Challengeshttps://github.com/type-challenges/type-challengesGraded exercises for mastering advanced type-level programming in TypeScript.Intermediate–Advanced
Matt Pocock’s Total TypeScripthttps://www.totaltypescript.com/Structured video course and articles on practical TS patterns; widely regarded as the best paid course for working developers.Intermediate–Advanced

React

TitleURLDescriptionDifficulty
React Documentation (react.dev)https://react.dev/Official React docs rebuilt in 2023 with hooks-first examples and interactive sandboxes.Beginner–Intermediate
React Server Components RFChttps://github.com/reactjs/rfcs/blob/main/text/0188-server-components.mdThe original RFC explaining RSC design rationale; essential for understanding Next.js App Router.Intermediate–Advanced
useHookshttps://usehooks.com/Curated, well-explained collection of production-quality custom hooks.Intermediate
Tanner Linsley — TanStack Query Docshttps://tanstack.com/query/latest/docs/framework/react/overviewOfficial docs for TanStack Query (React Query); the standard for server-state management.Intermediate
React Patternshttps://reactpatterns.com/Reference for established component composition patterns; concise and scannable.Intermediate

Vue

TitleURLDescriptionDifficulty
Vue 3 Official Docshttps://vuejs.org/guide/introductionOfficial guide covering Composition API, reactivity, SFCs, and migration from Vue 2.Beginner–Intermediate
Vue 3 Migration Guidehttps://v3-migration.vuejs.org/Official list of all breaking changes from Vue 2 to Vue 3 with migration paths.Intermediate
Pinia Documentationhttps://pinia.vuejs.org/Official docs for Pinia, the official Vue state management library replacing Vuex.Beginner–Intermediate
VueUsehttps://vueuse.org/Collection of composable utilities; both a reference and a learning resource for Composition API patterns.Intermediate
Vue School — Vue 3 Masterclasshttps://vueschool.io/courses/the-vue-3-masterclassThe most comprehensive structured Vue 3 course; covers Composition API, Pinia, Vue Router, testing.Beginner–Intermediate

Next.js

TitleURLDescriptionDifficulty
Next.js Documentationhttps://nextjs.org/docsOfficial docs covering App Router, server/client components, routing, caching, and deployment.Beginner–Advanced
Next.js Learn (official tutorial)https://nextjs.org/learnHands-on project-based tutorial from Vercel; teaches App Router, Postgres, auth, and deployment.Beginner–Intermediate
Next.js GitHub Discussionshttps://github.com/vercel/next.js/discussionsOfficial forum for architecture questions, patterns, and caching behavior explanations from maintainers.Intermediate–Advanced
Vercel Blog — Architecture Postshttps://vercel.com/blogOfficial Vercel engineering posts on rendering strategies, caching, and Edge runtime trade-offs.Intermediate–Advanced
Lee Robinson’s YouTube Channelhttps://www.youtube.com/@leerobDX lead at Vercel; concise videos explaining Next.js and React internals.Intermediate

Nuxt

TitleURLDescriptionDifficulty
Nuxt Documentationhttps://nuxt.com/docsOfficial Nuxt 3 docs covering file-based routing, auto-imports, server routes, and composables.Beginner–Advanced
Nuxt Moduleshttps://nuxt.com/modulesOfficial module registry; the authoritative list of first-party and community Nuxt integrations.All levels
UnJS Documentationhttps://unjs.io/Docs for the underlying utilities powering Nuxt (H3, Nitro, ofetch, etc.); useful for understanding Nuxt internals.Intermediate–Advanced
Nuxt GitHub Discussionshttps://github.com/nuxt/nuxt/discussionsActive community Q&A from maintainers; often the fastest path to canonical answers.Intermediate
Daniel Roe’s Bloghttps://roe.dev/blogNuxt core team lead; posts on Nuxt architecture, composables, and advanced patterns.Intermediate–Advanced

NestJS

TitleURLDescriptionDifficulty
NestJS Documentationhttps://docs.nestjs.com/Official reference covering modules, DI, guards, interceptors, pipes, microservices, and WebSockets.Beginner–Advanced
NestJS Fundamentals (official course)https://courses.nestjs.com/Free official video course from the NestJS team; the fastest structured onboarding path.Beginner–Intermediate
NestJS Recipeshttps://docs.nestjs.com/recipes/prismaOfficial integration guides for Prisma, TypeORM, Swagger, CQRS, and other ecosystem tools.Intermediate
Trilon Bloghttps://trilon.io/blogBlog from NestJS core contributors; posts on advanced DI, performance, and architecture patterns.Advanced
Awesome NestJShttps://github.com/nestjs/awesome-nestjsCurated community list of NestJS libraries, tools, and examples; maintained by the team.All levels

PostgreSQL

TitleURLDescriptionDifficulty
PostgreSQL Official Documentationhttps://www.postgresql.org/docs/current/The authoritative reference for all PostgreSQL features, SQL syntax, and server configuration.All levels
The Art of PostgreSQLhttps://theartofpostgresql.com/Book by Dimitri Fontaine focused on using SQL effectively; covers advanced queries, indexing, and performance.Intermediate–Advanced
Use The Index, Lukehttps://use-the-index-luke.com/Free, vendor-neutral reference on indexing and query performance; PostgreSQL examples throughout.Intermediate–Advanced
pganalyze Bloghttps://pganalyze.com/blogHigh-quality technical posts on PostgreSQL performance, EXPLAIN analysis, and configuration.Advanced
Supabase PostgreSQL Guideshttps://supabase.com/docs/guides/database/introductionPractical PostgreSQL guides covering RLS, full-text search, and JSON — useful even without Supabase.Beginner–Intermediate

Prisma

TitleURLDescriptionDifficulty
Prisma Documentationhttps://www.prisma.io/docs/Official reference for schema definition, client API, migrations, and relation handling.Beginner–Advanced
Prisma Data Guidehttps://www.prisma.io/dataguideIn-depth articles on database concepts, connection pooling, and data modeling — not Prisma-specific.Beginner–Intermediate
Prisma Examples Repositoryhttps://github.com/prisma/prisma-examplesOfficial collection of working examples across frameworks (NestJS, Next.js, Express, etc.).Beginner–Intermediate
Prisma Migrate Mental Modelhttps://www.prisma.io/docs/orm/prisma-migrate/understanding-prisma-migrateExplains shadow database, drift detection, and migration history — critical for production use.Intermediate–Advanced
Accelerate Documentationhttps://www.prisma.io/docs/acceleratePrisma’s connection pooler and global cache layer; relevant for serverless deployments.Intermediate

Drizzle ORM

TitleURLDescriptionDifficulty
Drizzle ORM Documentationhttps://orm.drizzle.team/docs/overviewOfficial docs for schema definition, queries, relations, and the Drizzle Kit migration CLI.Beginner–Advanced
Drizzle Studiohttps://orm.drizzle.team/drizzle-studio/overviewGUI for browsing and editing database data; accessed via drizzle-kit studio.Beginner
Drizzle Examples Repositoryhttps://github.com/drizzle-team/drizzle-orm/tree/main/examplesOfficial framework-specific examples for Next.js, NestJS, Bun, and others.Beginner–Intermediate
Drizzle GitHub Discussionshttps://github.com/drizzle-team/drizzle-orm/discussionsActive maintainer responses; canonical answers to edge cases in relation queries and migrations.Intermediate–Advanced
This Week In Drizzlehttps://orm.drizzle.team/blogOfficial blog with release notes and migration guides for new Drizzle versions.All levels

Testing

TitleURLDescriptionDifficulty
Vitest Documentationhttps://vitest.dev/Official docs for the Vite-native test runner; covers unit, integration, snapshot, and coverage.Beginner–Advanced
Playwright Documentationhttps://playwright.dev/docs/introOfficial docs for browser automation, E2E tests, API testing, and trace viewer.Beginner–Advanced
Testing Library Docshttps://testing-library.com/docs/Guides for @testing-library/react and @testing-library/vue; teaches accessibility-first testing.Beginner–Intermediate
Kent C. Dodds — Testing JavaScripthttps://testingjavascript.com/Structured paid course on the full JS testing spectrum from unit to E2E; widely used reference.Intermediate
Google Testing Blog — Test Pyramidhttps://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.htmlFoundational post on the test pyramid and cost trade-offs; applicable to any stack.Beginner–Intermediate

DevOps

TitleURLDescriptionDifficulty
GitHub Actions Documentationhttps://docs.github.com/en/actionsOfficial reference for workflow syntax, runners, secrets management, and marketplace actions.Beginner–Advanced
Docker Official Documentationhttps://docs.docker.com/Authoritative reference for Dockerfile authoring, Compose, and multi-stage builds.Beginner–Advanced
Render Documentationhttps://render.com/docsOfficial docs for deploying web services, workers, cron jobs, and managed databases on Render.Beginner–Intermediate
The Twelve-Factor Apphttps://12factor.net/Methodology for building portable, scalable services; directly applicable to Node.js deployments.Beginner–Intermediate
OpenTelemetry JavaScripthttps://opentelemetry.io/docs/languages/js/Official docs for adding distributed tracing and metrics to Node.js applications.Intermediate–Advanced

Security

TitleURLDescriptionDifficulty
OWASP Top 10 (2021)https://owasp.org/www-project-top-ten/The canonical reference for web application security risks; use as a checklist for any project.Beginner–Intermediate
Snyk Learnhttps://learn.snyk.io/Free, interactive security lessons covering injection, XSS, SSRF, and dependency vulnerabilities in JS/TS.Beginner–Intermediate
Helmet.js Documentationhttps://helmetjs.github.io/Reference for HTTP security headers middleware; should be applied to every Express/NestJS app.Beginner
OWASP Node.js Cheat Sheethttps://cheatsheetseries.owasp.org/cheatsheets/Nodejs_Security_Cheat_Sheet.htmlConcise, actionable security checklist specific to Node.js applications.Intermediate
Auth.js (NextAuth) Documentationhttps://authjs.dev/Official docs for the most widely-used authentication library in the Next.js ecosystem.Intermediate

Last updated: 2026-02-18

Appendix C — Stack Decision Log

Each entry documents a deliberate technology choice. The format is consistent: what was chosen, what was considered, why the choice was made, and what conditions would prompt reconsideration.


TypeScript

What we chose: TypeScript 5.x, strict mode enabled.

What we considered: Plain JavaScript, JSDoc type annotations over plain JS.

Why we chose it: TypeScript eliminates an entire category of runtime errors that .NET engineers expect the compiler to catch. It enables refactoring with confidence, produces better IDE tooling, and is now the default in every major framework (NestJS, Next.js, Nuxt). Strict mode is non-negotiable — it closes the loopholes that make partial TS adoption equivalent to untyped JS.
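In practice this means a tsconfig.json along these lines (a sketch; exact options vary by project and framework preset):

// tsconfig.json (illustrative; tsconfig files allow comments)
{
  "compilerOptions": {
    "strict": true,                     // noImplicitAny, strictNullChecks, and friends
    "noUncheckedIndexedAccess": true,   // arr[i] is typed as T | undefined
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "Bundler"
  }
}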

What we’d reconsider: If TC39 ships the Type Annotations proposal to Stage 4 and runtimes support inline type stripping natively, the build step for TS could be eliminated. This does not change our commitment to typed development, only the toolchain.


Node.js

What we chose: Node.js LTS (22.x as of 2026), with native TypeScript support via --experimental-strip-types or tsx for local development.

What we considered: Deno, Bun.

Why we chose it: Node.js has the largest package ecosystem, the deepest framework support (NestJS, Express, Fastify all target Node first), and the most mature production track record. Deno has a compelling security model but limited ecosystem adoption. Bun is fast but its Node compatibility layer still has edge cases in 2026 that create friction in production.

What we’d reconsider: Bun is the most likely successor as its compatibility surface matures. Deno becomes compelling if its npm compatibility is validated across the full dependency tree of our chosen frameworks.


React

What we chose: React 19 with the App Router (Next.js) for applications requiring server-side rendering, and Vite + React for client-only tooling applications.

What we considered: Solid.js, Preact, Svelte.

Why we chose it: React has the largest talent pool — .NET engineers transitioning to the JS stack will find more examples, answers, and community support than any alternative. Its component model maps well to the mental models .NET engineers bring from Blazor. React’s ecosystem (TanStack Query, shadcn/ui, Radix UI) is unmatched in depth.

What we’d reconsider: Solid.js for applications where render performance is a primary concern. Svelte for smaller tools where bundle size matters and the team can accept a smaller hiring pool.


Vue 3

What we chose: Vue 3 with the Composition API (<script setup>) for applications where the team has existing Vue investment (e.g., legacy upgrades from Vue 2).

What we considered: React (as the default alternative), Svelte.

Why we chose it: Vue 3’s Composition API is close enough to React hooks that cross-training is feasible. Single File Components (SFCs) are a natural fit for engineers used to .razor files — one file, one component. Vue’s progressive adoption model also allows us to wrap existing components without full rewrites.

What we’d reconsider: New greenfield projects default to React unless there is an existing Vue codebase or a specific team skill requirement. Vue’s ecosystem, while strong, is smaller than React’s.


Next.js

What we chose: Next.js 15 (App Router) for full-stack applications requiring SSR, SSG, or ISR with React.

What we considered: Remix, Astro, plain Vite + React with a separate API layer.

Why we chose it: Next.js is the dominant React meta-framework with the widest deployment support, the deepest ecosystem integrations (Prisma, Clerk, Vercel, Sentry all publish official Next.js guides), and the most active development. The App Router’s server component model resolves the data-fetching patterns that caused friction in the Pages Router.

What we’d reconsider: Remix for applications with complex nested routing and mutation-heavy forms, where its action/loader model is a cleaner fit. Astro for content-heavy sites where JS interactivity is minimal.


Nuxt

What we chose: Nuxt 3 for full-stack Vue applications.

What we considered: SvelteKit, plain Vite + Vue with a separate API layer.

Why we chose it: Nuxt 3 is the Vue equivalent of Next.js — it provides SSR, file-based routing, server routes (via Nitro), and auto-imports in a single package. For teams with Vue 3 investment, Nuxt removes the need to compose these concerns manually.

What we’d reconsider: If a project does not require SSR and Vue 3 is chosen purely for component model familiarity, plain Vite + Vue is simpler and avoids Nuxt’s additional abstraction layers (Nitro, auto-imports) which can obscure debugging.


NestJS

What we chose: NestJS 10 for backend API services.

What we considered: Express, Fastify, Hono, tRPC (for type-safe API without a framework).

Why we chose it: NestJS is the most directly transferable framework for .NET engineers. Its module/controller/service/DI architecture is a deliberate mirror of ASP.NET Core. The learning curve from .NET to NestJS is shorter than from .NET to Express. It ships with OpenAPI/Swagger support, a mature testing utilities layer, and first-class TypeScript.
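As a flavor of how direct the mapping is, a minimal controller sketch (illustrative, not taken from a real project):

// users.controller.ts (illustrative)
import { Controller, Get, Param } from '@nestjs/common';
import { UsersService } from './users.service';

@Controller('users')                                      // ~ [ApiController] + [Route("users")]
export class UsersController {
  constructor(private readonly users: UsersService) {}    // constructor injection, as in ASP.NET Core

  @Get(':id')                                             // ~ [HttpGet("{id}")]
  findOne(@Param('id') id: string) {
    return this.users.findOne(id);
  }
}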

What we’d reconsider: Hono for lightweight edge-deployed APIs where NestJS’s startup overhead and decorator-heavy model is a poor fit. tRPC for internal service-to-service APIs where the client and server are in the same monorepo and type-safe RPC eliminates the need for a REST contract.


Prisma

What we chose: Prisma ORM for data access in applications requiring a full ORM with migrations and an intuitive query API.

What we considered: Drizzle ORM, TypeORM, MikroORM, raw pg with queries.

Why we chose it: Prisma’s schema file and generated client produce a developer experience closer to EF Core than any alternative — a typed client derived from the schema, a migration CLI, and readable query results without manual mapping. The Prisma Studio GUI aids onboarding. For .NET engineers familiar with DbContext, prisma.user.findMany() is immediately legible.
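A short sketch of what that legibility looks like (assuming a User model with a posts relation; the field names are illustrative):

// Prisma Client usage (illustrative)
const users = await prisma.user.findMany({
  where: { email: { endsWith: '@example.com' } },   // ~ .Where(u => u.Email.EndsWith(...))
  include: { posts: true },                         // ~ .Include(u => u.Posts)
  take: 20,                                         // ~ .Take(20)
});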

What we’d reconsider: Drizzle ORM is preferred when: (a) the team wants SQL-first control over queries without ORM magic, (b) the application is deployed serverless and Prisma’s connection management adds latency, or (c) the N+1 query generation in Prisma relations becomes a performance bottleneck that cursor pagination does not resolve.


Drizzle ORM

What we chose: Drizzle ORM for data access in applications where SQL proximity, performance, and serverless compatibility are priorities.

What we considered: Prisma, Kysely, raw SQL with pg.

Why we chose it: Drizzle is type-safe without code generation — the schema is TypeScript, the queries are TypeScript, and the output type is inferred at the call site. It produces literal SQL with no hidden N+1 patterns. Its serverless compatibility (no connection pool management, works on edge runtimes) makes it the correct choice for Next.js API routes and Cloudflare Workers.
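A compressed sketch of that workflow (table and column names are illustrative):

// schema.ts (illustrative): the schema is ordinary TypeScript, no codegen step
import { pgTable, serial, text } from 'drizzle-orm/pg-core';
import { eq } from 'drizzle-orm';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull(),
});

// db is your drizzle(connection) instance; the result type is inferred at the call site
const admins = await db.select().from(users).where(eq(users.email, 'admin@example.com'));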

What we’d reconsider: Prisma for teams where the schema-driven generator workflow (auto-complete on prisma.user.*) and migration UX (prisma migrate dev) outweigh Drizzle’s verbosity. The two ORMs solve different parts of the developer experience.


PostgreSQL

What we chose: PostgreSQL 16 as the primary relational database.

What we considered: MySQL/MariaDB, SQLite (dev only), PlanetScale (MySQL-based).

Why we chose it: PostgreSQL is the most capable open-source relational database. It supports JSON columns, full-text search, array types, row-level security, CTEs, window functions, and advisory locks — features that MySQL requires workarounds for. Both Prisma and Drizzle have the deepest PostgreSQL support. Render provides managed PostgreSQL as a first-class product.

What we’d reconsider: MySQL if a client mandates it. SQLite remains the correct choice for local development requiring zero infrastructure and for embedded or single-file deployment scenarios (via Turso/libSQL in production).


pnpm

What we chose: pnpm 9 as the package manager.

What we considered: npm, Yarn (classic and Berry), Bun’s package manager.

Why we chose it: pnpm uses a content-addressable store that links packages rather than copying them, producing faster installs and dramatically smaller disk usage in monorepos. Its workspace protocol (workspace:*) and strict node_modules layout prevent packages from importing modules they did not declare (so-called phantom dependencies). This enforces the same explicitness .NET engineers expect from NuGet references.
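A minimal illustration of the two pieces (paths and package names are illustrative):

# pnpm-workspace.yaml (illustrative)
packages:
  - 'apps/*'
  - 'packages/*'

# apps/web/package.json (illustrative): the workspace protocol is the rough analog of a ProjectReference
{
  "dependencies": {
    "@acme/shared": "workspace:*"
  }
}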

What we’d reconsider: Bun’s built-in package manager if/when Bun runtime adoption in production reaches the point where the entire toolchain unification is net positive.


Vitest

What we chose: Vitest as the unit and integration test runner.

What we considered: Jest, Mocha, Node’s built-in node:test.

Why we chose it: Vitest is Jest-compatible (same describe/it/expect API, same mock system) but runs inside the Vite pipeline — no separate Babel transform, instant startup, native ESM support. For projects already using Vite (most React/Vue projects), Vitest requires no additional configuration. For NestJS projects, Vitest runs faster than Jest and handles TypeScript without ts-jest.
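The API in practice (a sketch; sum is a hypothetical module under test):

// sum.test.ts (illustrative)
import { describe, it, expect } from 'vitest';
import { sum } from './sum';                 // hypothetical module under test

describe('sum', () => {
  it('adds two numbers', () => {
    expect(sum(2, 3)).toBe(5);               // ~ Assert.Equal(5, ...)
  });

  it.each([[1, 1, 2], [2, 2, 4]])('sum(%i, %i) = %i', (a, b, expected) => {
    expect(sum(a, b)).toBe(expected);        // ~ [Theory] + [InlineData]
  });
});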

What we’d reconsider: Node’s built-in node:test runner is improving rapidly. If it reaches Jest API parity and Vitest’s ecosystem plugins (coverage, snapshot serializers) stabilize around it, the dependency could be eliminated.


Playwright

What we chose: Playwright for end-to-end browser testing and API integration testing.

What we considered: Cypress, Selenium, WebdriverIO.

Why we chose it: Playwright is maintained by Microsoft, supports all major browsers (Chromium, Firefox, WebKit) from a single API, runs tests in parallel by default, and has a superior async model compared to Cypress. Its request context supports API-layer testing without a browser, replacing the need for a separate HTTP integration test tool. The Playwright Trace Viewer provides debugging capabilities that Cypress cannot match.
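A sketch of the API-only mode (the endpoint and response shape are illustrative):

// health.spec.ts (illustrative)
import { test, expect } from '@playwright/test';

test('health endpoint responds', async ({ request }) => {
  const res = await request.get('/api/health');           // baseURL comes from playwright.config.ts
  expect(res.ok()).toBeTruthy();
  expect(await res.json()).toMatchObject({ status: 'ok' });
});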

What we’d reconsider: Cypress for teams where its real-time visual debugger and time-travel replay are higher priorities than cross-browser coverage or parallelism. Cypress’s component testing story is also strong for isolated React/Vue component tests.


Tailwind CSS

What we chose: Tailwind CSS v4 as the primary styling solution.

What we considered: CSS Modules, vanilla CSS with design tokens, styled-components, Emotion, UnoCSS.

Why we chose it: Tailwind eliminates the naming problem in CSS. For .NET engineers who are not CSS specialists, utility classes produce consistent, responsive UIs without the cognitive overhead of BEM naming, specificity conflicts, or dead style accumulation. v4’s CSS-first configuration (no tailwind.config.js) and native cascade layer support reduce boilerplate significantly. shadcn/ui is built on Tailwind, which drives further adoption.

What we’d reconsider: CSS Modules for teams where the utility-class model creates long className strings that obscure component structure. UnoCSS for monorepos where the fastest possible build time justifies the smaller community.


shadcn/ui

What we chose: shadcn/ui as the component primitive layer.

What we considered: Chakra UI, MUI (Material), Radix UI directly, Mantine, Headless UI.

Why we chose it: shadcn/ui is not a component library — it is a CLI that copies components into your codebase. This means components are owned and customizable without fighting library internals. It is built on Radix UI primitives (accessible by default) and styled with Tailwind. For .NET engineers used to owning their markup, this model is more intuitive than consuming a black-box component library. No runtime dependency on a component package version.

What we’d reconsider: Mantine or Chakra if the project requires a large number of complex data-display components (data grids, date pickers, complex charts) that shadcn/ui does not provide and building them from Radix primitives is not feasible within the project timeline.


GitHub

What we chose: GitHub for source control, issue tracking, PR workflow, and CI/CD via GitHub Actions.

What we considered: GitLab, Bitbucket, Azure DevOps.

Why we chose it: GitHub is the center of gravity for open-source JS/TS tooling — every library references GitHub issues, GitHub Discussions, and GitHub Releases. GitHub Actions has the largest marketplace of pre-built actions, and the gh CLI is the most capable Git-host CLI available. Integrations with Sentry, Snyk, SonarCloud, Vercel, Render, and Clerk all offer GitHub as the primary OAuth and webhook target.

What we’d reconsider: Azure DevOps for enterprise clients who have existing Azure contracts and require Active Directory integration. GitLab for teams requiring self-hosted SCM with no external data transfer.


GitHub Actions

What we chose: GitHub Actions for CI/CD pipelines.

What we considered: CircleCI, Jenkins, Azure Pipelines, Buildkite.

Why we chose it: GitHub Actions runs in the same environment as the repository with zero additional authentication setup. The YAML workflow syntax is consistent with how .NET engineers encounter it in Azure Pipelines. The marketplace provides actions for every tool in our stack (pnpm, Playwright, Prisma, Render deploy, Snyk, SonarCloud). Free tier minutes are sufficient for most projects.
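A minimal PR pipeline sketch for this stack (action versions and script names are assumptions to check against your repo):

# .github/workflows/ci.yml (illustrative)
name: CI
on: pull_request
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm lint
      - run: pnpm test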

What we’d reconsider: Buildkite for monorepos with long test suites requiring custom runner orchestration. Azure Pipelines for projects already hosted in Azure DevOps.


Render

What we chose: Render for hosting web services, background workers, and managed PostgreSQL.

What we considered: Vercel, Railway, Fly.io, AWS ECS, Azure App Service.

Why we chose it: Render’s pricing model is predictable, its managed PostgreSQL requires no infrastructure expertise, and its deploy-from-GitHub workflow is the simplest production deployment available without vendor lock-in to a specific framework (unlike Vercel, which optimizes for Next.js). For .NET engineers unfamiliar with container orchestration, Render eliminates the infrastructure surface area entirely while remaining Docker-compatible.

What we’d reconsider: Vercel for Next.js applications where edge caching, ISR, and regional deployments are critical — Render does not replicate Vercel’s edge network. Fly.io for latency-sensitive applications requiring global presence with custom Docker images. AWS ECS/Fargate once the team has DevOps capacity to manage it.


Sentry

What we chose: Sentry for error tracking and performance monitoring.

What we considered: Datadog, New Relic, Rollbar, Highlight.io.

Why we chose it: Sentry is the standard in the JS/TS ecosystem — every major framework (Next.js, Nuxt, NestJS) provides official Sentry integration documentation. Its source map support for TypeScript stack traces is excellent. The free tier covers small applications. For .NET engineers accustomed to Application Insights, Sentry provides the same error aggregation, user impact assessment, and performance transaction tracing.

What we’d reconsider: Datadog for organizations that already use it for infrastructure monitoring and want a unified observability platform. OpenTelemetry with a self-hosted backend for teams with strong DevOps capacity who want to avoid per-event pricing.


Clerk

What we chose: Clerk for user authentication, session management, and user management UI.

What we considered: Auth.js (NextAuth), Supabase Auth, Firebase Auth, custom JWT implementation.

Why we chose it: Clerk provides hosted authentication with pre-built React components (sign-in, sign-up, user profile, organization management) and a managed user database. For .NET engineers building their first JS application, writing a secure auth system from scratch (PKCE, token rotation, session management) introduces significant risk. Clerk eliminates this entirely and integrates with Next.js, NestJS, and Remix via official SDKs. It supports SAML/SSO for enterprise clients without additional implementation.

What we’d reconsider: Auth.js for projects where hosting user data with a third party is not acceptable, or where the budget cannot support Clerk’s per-active-user pricing at scale. Supabase Auth if the project already uses Supabase as the database layer.


SonarCloud

What we chose: SonarCloud for static code analysis and code quality gates in CI.

What we considered: SonarQube (self-hosted), ESLint standalone, CodeClimate, DeepSource.

Why we chose it: SonarCloud integrates with GitHub PRs to enforce quality gates (coverage thresholds, complexity limits, duplication detection) without self-hosting infrastructure. For .NET engineers familiar with SonarQube, SonarCloud is the cloud-hosted equivalent. It covers TypeScript analysis, security hotspot detection, and cognitive complexity scoring — metrics that ESLint alone does not provide.

What we’d reconsider: DeepSource for faster PR feedback times. SonarQube self-hosted for organizations with data residency requirements. ESLint + custom rules for teams that want lightweight, opinionated linting without the SonarCloud dashboard overhead.


Snyk

What we chose: Snyk for software composition analysis (SCA) — dependency vulnerability scanning.

What we considered: Dependabot (GitHub native), npm audit, OWASP Dependency-Check, Mend (WhiteSource).

Why we chose it: Snyk provides actionable remediation advice, not just CVE listings. Its GitHub integration automatically opens PRs for vulnerable dependencies. It scans package.json, Dockerfiles, and IaC configurations in a single tool. For .NET engineers used to dotnet list package --vulnerable, Snyk is the direct analog with a richer UX.

What we’d reconsider: Dependabot alone for projects with minimal security compliance requirements — it is free and built into GitHub. Mend for enterprise clients requiring SLA-backed SCA with license compliance reporting.


Semgrep

What we chose: Semgrep for custom static analysis rules and SAST (Static Application Security Testing).

What we considered: CodeQL, ESLint security plugins, Snyk Code.

Why we chose it: Semgrep allows writing custom rules in a pattern language that mirrors source code syntax. This makes it possible to enforce project-specific conventions (e.g., “never use Math.random() for security-sensitive values,” “always validate input with Zod before database operations”) alongside community rulesets. Its OWASP and security rulesets detect injection, XSS, and path traversal patterns that ESLint cannot.
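A taste of the rule syntax (an illustrative rule, not one of our shipped rulesets):

# Semgrep rule (illustrative)
rules:
  - id: no-math-random-for-secrets
    languages: [typescript]
    severity: WARNING
    message: Use crypto.randomUUID() or crypto.randomBytes() for security-sensitive values
    pattern: Math.random()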

What we’d reconsider: CodeQL for projects requiring the deepest interprocedural data-flow analysis available. CodeQL’s query language has a steeper learning curve but catches vulnerabilities that pattern-matching tools miss.


Claude Code

What we chose: Claude Code (Anthropic) as the AI coding assistant integrated into the development workflow.

What we considered: GitHub Copilot, Cursor, Codeium, JetBrains AI.

Why we chose it: Claude Code operates as a command-line agent with read/write access to the repository, enabling multi-file refactors, architecture analysis, and document generation — not just single-line completions. For .NET engineers learning an unfamiliar ecosystem, the ability to ask “why does this behave differently from C#” and receive a contextual, codebase-aware answer accelerates the transition. Claude’s instruction-following precision on complex multi-step tasks (migrations, test generation, PR creation) reduces review burden.

What we’d reconsider: GitHub Copilot for teams where deep IDE integration (inline suggestions, test generation inside the editor) is the primary use case and terminal-based agent workflows are not adopted. Cursor for teams that want an AI-first IDE rather than a CLI-first agent.


Last updated: 2026-02-18

Appendix D — Command Cheat Sheet: dotnet CLI to npm/pnpm/gh/nest CLI

Commands are organized by category. The third column flags gotchas or behavioral differences worth noting. All pnpm commands assume a pnpm-workspace.yaml monorepo unless noted.


Project Management

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet new console -n MyApp | mkdir my-app && cd my-app && pnpm init | No single scaffold command for plain Node. Use tsx src/index.ts to run TS directly. |
| dotnet new webapi -n MyApi | pnpm dlx @nestjs/cli new my-api | Creates a full NestJS project with module/controller/service scaffold. |
| dotnet new react -n MyApp | pnpm create vite my-app -- --template react-ts | Vite is the standard React scaffold. create-next-app for full-stack. |
| (no dotnet analog) | pnpm dlx create-next-app@latest my-app | Next.js scaffold. Prompts for TypeScript, Tailwind, App Router, src directory. |
| (no dotnet analog) | pnpm dlx nuxi@latest init my-app | Official Nuxt scaffold CLI. |
| dotnet new sln -n MySolution | pnpm-workspace.yaml (manual creation) | Add workspace globs: packages: ['apps/*', 'packages/*']. |
| dotnet sln add MyProject | Add path to pnpm-workspace.yaml packages glob | No CLI command — edit the YAML file directly. |
| dotnet sln remove MyProject | Remove path from pnpm-workspace.yaml | Edit the YAML file directly. |
| dotnet new gitignore | npx gitignore node | Uses the gitignore npm package to fetch GitHub's official .gitignore template. |
| dotnet new editorconfig | Copy .editorconfig from template | No scaffold command. ESLint + Prettier replace most EditorConfig concerns in JS. |
| nest new project-name | pnpm dlx @nestjs/cli new project-name | NestJS equivalent of dotnet new webapi. Prompts for package manager. |
| nest generate module users | nest g module users | Generates users.module.ts and updates app.module.ts. |
| nest generate controller users | nest g controller users | Generates controller with route stubs and test file. |
| nest generate service users | nest g service users | Generates injectable service and test file. |
| nest generate resource users | nest g resource users | Generates full CRUD module: module, controller, service, DTOs, entity. Closest to dotnet scaffold. |
| nest generate middleware logger | nest g middleware logger | Generates NestJS middleware class. |
| nest generate guard auth | nest g guard auth | Generates a NestJS guard (equivalent of ASP.NET Core authorization policy). |
| nest generate interceptor logging | nest g interceptor logging | Generates a NestJS interceptor (equivalent of ASP.NET Core action filters). |
| nest generate pipe validation | nest g pipe validation | Generates a NestJS pipe (equivalent of model binders + validators). |
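For orientation, here is roughly what nest g service users emits: an empty injectable class (plus a users.service.spec.ts test file). The findAll method below is added by hand and is purely illustrative.

```ts
// users.service.ts: a minimal sketch of the generated shell, with one hand-written method
import { Injectable } from '@nestjs/common';

@Injectable() // registers the class with Nest's DI container, like AddScoped in ASP.NET Core
export class UsersService {
  // Illustrative method you would add yourself after generation.
  findAll(): string[] {
    return ['ada', 'grace'];
  }
}
```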

Building

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet build | pnpm build | Runs the build script in package.json. Typically tsc for libraries, vite build for frontends. |
| dotnet build --configuration Release | NODE_ENV=production pnpm build | Set NODE_ENV before the build command. Vite and Next.js tree-shake based on this. |
| dotnet build --no-restore | pnpm build (install is separate) | In JS, restore (install) and build are always separate commands. |
| dotnet publish | pnpm build then deploy artifact | No single "publish" command. Build outputs dist/ or .next/. Deploy from there. |
| dotnet publish -c Release -o ./out | pnpm build && cp -r dist/ out/ | Configure output directory in vite.config.ts (build.outDir) or tsconfig.json (outDir). |
| dotnet clean | rm -rf dist .next .nuxt node_modules/.cache | No single clean command. Add a clean script to package.json: "clean": "rimraf dist .next". |
| csc (C# compiler) | pnpm tsc or pnpm exec tsc | tsc --noEmit for type-checking only without producing output. |
| dotnet watch build | pnpm tsc --watch | Type-checks incrementally on file change. |
| dotnet format | pnpm lint:fix + pnpm format | ESLint with --fix for lint errors; Prettier for formatting. These are separate tools in JS. |
| (MSBuild target) | package.json scripts section | The scripts block in package.json is the equivalent of MSBuild targets. |
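As a minimal sketch of the output-directory configuration mentioned above, assuming a Vite project (the out directory name is illustrative):

```ts
// vite.config.ts: a minimal sketch; 'out' is an illustrative directory name
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    outDir: 'out',     // analogous to dotnet publish -o ./out
    emptyOutDir: true, // clear the directory before each build
  },
});
```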

Testing

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet test | pnpm test | Runs the test script. Typically vitest run or jest under the hood. |
| dotnet watch test | pnpm test --watch or vitest | Vitest interactive watch mode: vitest (no flags). Re-runs affected tests on change. |
| dotnet test --filter "TestName" | vitest run -t "test name" | -t filters by test name string or regex. Also: vitest run path/to/file.test.ts. |
| dotnet test --filter "Category=Unit" | vitest run --project unit | Vitest workspaces allow named projects with separate configs. |
| dotnet test --collect:"Code Coverage" | vitest run --coverage | Requires @vitest/coverage-v8 or @vitest/coverage-istanbul installed. |
| dotnet test --logger "trx" | vitest run --reporter=junit | JUnit XML output for CI integration. Configure in vitest.config.ts. |
| dotnet test -v detailed | vitest run --reporter=verbose | Shows individual test names and results. |
| xunit [Fact] | it('...', () => {}) | Single test case. |
| xunit [Theory] [InlineData] | it.each([...])('...', (...) => {}) | Parameterized tests. |
| xunit constructor / Dispose() | beforeEach() / afterEach() | Setup/teardown hooks. |
| `Mock<T>()` (Moq) | vi.fn() / vi.mock('module') | vi.fn() creates a spy. vi.mock() replaces an entire module with mocks. |
| UseInMemoryDatabase() (EF Core) | In-memory SQLite / prisma.$use() middleware mock | Prisma has no official in-memory mode. Use SQLite with Prisma for integration tests. |
| dotnet test --no-build | (not needed) | Vitest does not require a prior build step — it uses Vite's transform pipeline. |
| dotnet test (Playwright for .NET) | pnpm exec playwright test | Runs E2E tests. Configure in playwright.config.ts. |
| playwright codegen https://url | pnpm exec playwright codegen https://url | Records a browser session and generates test code. |
| playwright show-report | pnpm exec playwright show-report | Opens the HTML test report in the browser. |
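To make the xUnit-to-Vitest rows concrete, here is a minimal sketch of a spec exercising it, it.each, beforeEach, and vi.fn(); the add function and prices array are illustrative, not from a real project:

```ts
// math.test.ts: a minimal sketch of the xUnit-to-Vitest mappings above
import { describe, it, expect, beforeEach, vi } from 'vitest';

const add = (a: number, b: number) => a + b; // illustrative function under test

describe('add', () => {
  let prices: number[];

  beforeEach(() => {
    prices = [10, 20]; // runs before every test, like per-test setup in an xUnit constructor
  });

  it('adds two numbers', () => {
    expect(add(prices[0], prices[1])).toBe(30); // [Fact]
  });

  it.each([
    [1, 2, 3],
    [2, 3, 5],
  ])('adds %i + %i to get %i', (a, b, expected) => {
    expect(add(a, b)).toBe(expected); // [Theory] + [InlineData]
  });

  it('spies on a callback', () => {
    const onTotal = vi.fn(); // Moq-style spy
    onTotal(add(prices[0], prices[1]));
    expect(onTotal).toHaveBeenCalledWith(30);
  });
});
```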

Package Management

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet restore | pnpm install | Installs all dependencies declared in package.json. |
| dotnet add package Newtonsoft.Json | pnpm add package-name | Adds to dependencies in package.json. |
| dotnet add package Newtonsoft.Json --version 13.0.1 | pnpm add package-name@13.0.1 | Pin exact version with @version. |
| dotnet add package xunit (dev dep) | pnpm add -D package-name | -D adds to devDependencies. Dev deps are not installed in production builds. |
| dotnet remove package Newtonsoft.Json | pnpm remove package-name | Removes from package.json and updates lockfile. |
| dotnet list package | pnpm list | Lists installed packages. Add --depth Infinity for the full tree. |
| dotnet list package --outdated | pnpm outdated | Lists packages with newer versions available. |
| dotnet list package --vulnerable | pnpm audit | Checks for known CVEs. Also run snyk test for deeper SCA. |
| dotnet update package | pnpm update | Updates all packages to latest within declared ranges. |
| dotnet update package PackageName | pnpm update package-name | Updates a single package. |
| (NuGet pack) | pnpm pack | Creates a .tgz tarball of the package for publishing. |
| (NuGet push) | pnpm publish | Publishes the package to npm. Requires --access public for scoped packages. |
| dotnet tool install -g dotnet-ef | pnpm add -g @nestjs/cli | Global tool install. Note: prefer pnpm dlx for one-off tool use to avoid global pollution. |
| (workspace reference) | pnpm add @myorg/shared --workspace | Adds a local workspace package as a dependency using workspace:* protocol. |
| dotnet new nugetconfig | .npmrc file | Per-project npm configuration: registry, auth tokens, workspace settings. |

Database / Migrations

Prisma

| dotnet CLI / EF Core | Prisma Equivalent | Notes |
| --- | --- | --- |
| dotnet ef migrations add InitialCreate | pnpm prisma migrate dev --name initial-create | Creates a new migration file in prisma/migrations/. Also applies it in dev. |
| dotnet ef database update | pnpm prisma migrate deploy | Applies pending migrations. Use in CI/CD and production — does not prompt. |
| dotnet ef migrations remove | pnpm prisma migrate reset | Resets dev DB and re-applies all migrations. Destructive — dev only. |
| dotnet ef database drop | pnpm prisma migrate reset | Drops and recreates the database in dev. |
| dotnet ef dbcontext scaffold | pnpm prisma db pull | Introspects existing database and generates schema.prisma. |
| (generate client) | pnpm prisma generate | Regenerates the Prisma Client after schema changes. Required after every schema.prisma edit. |
| (open database browser) | pnpm prisma studio | Opens Prisma Studio GUI at localhost:5555. |
| (seed database) | pnpm prisma db seed | Runs the seed script defined in package.json under prisma.seed. |
| dotnet ef migrations list | pnpm prisma migrate status | Shows applied and pending migrations. |
| (validate schema) | pnpm prisma validate | Checks schema.prisma for syntax and relation errors. |
| (format schema) | pnpm prisma format | Auto-formats schema.prisma. |
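A minimal sketch of the seed script that pnpm prisma db seed runs, assuming a User model with a unique email field exists in schema.prisma and that package.json points prisma.seed at tsx prisma/seed.ts:

```ts
// prisma/seed.ts: a minimal sketch, assuming a User model with a unique email field
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // upsert keeps the seed idempotent, so it is safe to run repeatedly
  await prisma.user.upsert({
    where: { email: 'admin@example.com' },
    update: {},
    create: { email: 'admin@example.com', name: 'Admin' },
  });
}

main()
  .catch((err) => {
    console.error(err);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());
```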

Drizzle ORM

| dotnet CLI / EF Core | Drizzle Equivalent | Notes |
| --- | --- | --- |
| dotnet ef migrations add | pnpm drizzle-kit generate | Generates SQL migration files from schema changes. Does not apply them. |
| dotnet ef database update | pnpm drizzle-kit migrate | Applies pending generated migration files to the database. |
| dotnet ef dbcontext scaffold | pnpm drizzle-kit introspect | Generates Drizzle schema TypeScript file from existing database. |
| (push schema without migration) | pnpm drizzle-kit push | Pushes schema directly to DB without migration files. Dev/prototyping only. |
| (open database browser) | pnpm drizzle-kit studio | Opens Drizzle Studio GUI. |
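For reference, drizzle-kit generate and drizzle-kit introspect both revolve around a TypeScript schema file like this minimal sketch (table and column names are illustrative):

```ts
// src/db/schema.ts: a minimal Drizzle schema sketch for PostgreSQL
import { pgTable, serial, text, timestamp } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull().unique(),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});
```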

Running & Debugging

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet run | pnpm dev | Starts the dev server with hot reload. Typically vite, next dev, nuxt dev, or nest start --watch. |
| dotnet run --project MyApi | pnpm --filter my-api dev | In a pnpm monorepo, --filter targets a specific workspace package. |
| dotnet run --launch-profile https | pnpm dev --https | Framework-dependent. Vite: add server.https to vite.config.ts. Next.js: --experimental-https. |
| dotnet watch run | pnpm dev (already watching) | Dev servers in JS watch by default. No separate watch subcommand needed. |
| dotnet run --environment Production | NODE_ENV=production node dist/main.js | Set NODE_ENV and run the compiled output. Never use ts-node in production. |
| (attach debugger) | node --inspect dist/main.js | Opens a Chrome DevTools debugging port. VS Code Node debugger attaches via launch.json. |
| dotnet run --urls https://localhost:7001 | PORT=7001 pnpm dev | Most dev servers read PORT from env. NestJS: app.listen(process.env.PORT ?? 3000). |
| (REPL) | node / ts-node / tsx | Interactive REPL. tsx runs TypeScript directly without a compile step. |
| (environment variables) | .env file + dotenv / t3-env | dotenv is loaded explicitly in Node. Next.js and Nuxt load .env automatically. |
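A minimal sketch of the NestJS entry point referenced above: PORT is read from the environment with a local fallback of 3000, and hosts such as Render or Docker override it by injecting their own value.

```ts
// src/main.ts: a minimal NestJS bootstrap sketch (AppModule comes from the standard scaffold)
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // PORT is injected by the host; 3000 is the local development fallback
  await app.listen(process.env.PORT ?? 3000);
}

bootstrap();
```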

Deployment

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet publish -c Release | pnpm build | Produces the deployable artifact. For Node services: compiles TS to dist/. |
| (Docker build) | docker build -t my-app . | Write a Dockerfile. Use multi-stage builds: install deps, build, copy dist/ to final image. |
| (Docker run) | docker run -p 3000:3000 my-app | Map container port to host port. |
| (Azure deploy) | gh workflow run deploy.yml | Trigger a GitHub Actions deployment workflow. |
| (Render deploy) | Automatic on git push to main | Render detects pushes via GitHub webhook and deploys automatically if configured. |
| (environment config) | Set env vars in Render dashboard / GitHub Secrets | Never ship .env files. Use platform environment variable management. |
| (health check endpoint) | GET /health route | NestJS: @nestjs/terminus for health checks. Next.js: app/api/health/route.ts. |
| (rollback) | git revert HEAD && git push / Render manual rollback | Render retains previous deploys and supports one-click rollback via the dashboard. |
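The Next.js health endpoint mentioned above is just an App Router route handler; a minimal sketch (the response shape is illustrative):

```ts
// app/api/health/route.ts: a minimal sketch of a Next.js health check endpoint
export async function GET() {
  // Render (or any load balancer) can poll this path to verify the service is up
  return Response.json({ status: 'ok', uptime: process.uptime() });
}
```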

Git & Version Control

| dotnet CLI / VS tooling | gh CLI / git Equivalent | Notes |
| --- | --- | --- |
| (clone repo) | gh repo clone owner/repo | Clones and sets upstream automatically. |
| (create repo) | gh repo create my-repo --public | Creates the repo on GitHub. Add --clone to also clone it locally. |
| (create PR) | gh pr create --title "..." --body "..." | Opens a PR from the current branch to the base branch. |
| (view PRs) | gh pr list | Lists open PRs for the current repo. |
| (checkout PR) | gh pr checkout 123 | Checks out a PR branch locally by PR number. |
| (view PR status) | gh pr view 123 | Shows PR details, checks, and review status. |
| (merge PR) | gh pr merge 123 --squash | Merges with squash strategy. Also --merge or --rebase. |
| (create issue) | gh issue create --title "..." --body "..." | Creates a GitHub issue from the CLI. |
| (view issues) | gh issue list | Lists open issues. Add --assignee @me to filter. |
| (view CI checks) | gh run list | Lists recent GitHub Actions workflow runs. |
| (view run logs) | gh run view `<run-id>` --log | Shows the full log of a specific Actions run. |
| (re-run failed CI) | gh run rerun `<run-id>` --failed | Re-runs only the failed jobs in a workflow run. |
| (release tag) | gh release create v1.0.0 --generate-notes | Creates a GitHub Release with auto-generated changelog from merged PRs. |
| (browse repo) | gh browse | Opens the current repo in the browser. |
| (set secret) | gh secret set MY_SECRET | Sets a GitHub Actions repository secret. Prompts for value. |

Monorepo-Specific pnpm Commands

| Task | Command | Notes |
| --- | --- | --- |
| Run script in all packages | pnpm -r run build | -r (recursive) runs in every workspace package. |
| Run script in one package | pnpm --filter my-api run build | --filter targets by package name from package.json. |
| Run script in changed packages | pnpm --filter '...[HEAD~1]' run test | Only packages changed since the given commit, plus their dependents. Supported natively by pnpm filtering; Turborepo adds caching on top. |
| Install dep in one package | pnpm --filter my-api add express | Adds to the specific package, not the root. |
| Install dep in root | pnpm add -w typescript | -w (workspace root) installs at the monorepo root level. |
| List all workspace packages | pnpm ls -r --depth -1 | Lists all declared workspace packages. |
| Run with Turborepo | pnpm turbo run build | Turborepo adds dependency graph-aware caching and parallelism to pnpm workspaces. |

Miscellaneous / Environment

| dotnet CLI | JS/TS Equivalent | Notes |
| --- | --- | --- |
| dotnet --version | node --version / pnpm --version | Check runtime and package manager versions. |
| dotnet --list-sdks | nvm list / fnm list | fnm (Fast Node Manager) manages Node versions. Similar to nvm but faster. |
| dotnet new --list | pnpm dlx create-next-app --help | No unified template list. Each framework has its own scaffold CLI. |
| dotnet nuget list source | .npmrc registry configuration | Set registry=https://registry.npmjs.org/ or a private registry URL in .npmrc. |
| dotnet dev-certs https | mkcert localhost | mkcert generates locally-trusted TLS certificates for development. |
| dotnet user-secrets set | .env.local file | Next.js loads .env.local automatically and git-ignores it by convention. |
| dotnet format --verify-no-changes | pnpm lint && pnpm format --check | Used in CI to assert code is formatted. Prettier's --check flag exits non-zero if changes needed. |
| dotnet ef --help | pnpm prisma --help / pnpm drizzle-kit --help | Top-level help for the ORM CLI tool. |

Quick Reference: Script Conventions in package.json

The following script names are conventional — not enforced — but used consistently across the ecosystem.

| Script Name | Typical Command | dotnet Analog |
| --- | --- | --- |
| dev | vite / next dev / nest start --watch | dotnet watch run |
| build | tsc / vite build / next build | dotnet build -c Release |
| start | node dist/main.js / next start | dotnet run (production) |
| test | vitest run / jest | dotnet test |
| test:watch | vitest | dotnet watch test |
| test:e2e | playwright test | dotnet test (Playwright) |
| lint | eslint src/ | (Roslyn analyzers) |
| lint:fix | eslint src/ --fix | dotnet format |
| format | prettier --write . | dotnet format |
| format:check | prettier --check . | dotnet format --verify-no-changes |
| typecheck | tsc --noEmit | (implicit in dotnet build) |
| db:generate | prisma generate / drizzle-kit generate | (no analog) |
| db:migrate | prisma migrate dev / drizzle-kit migrate | dotnet ef database update |
| db:studio | prisma studio / drizzle-kit studio | (no analog — use SSMS / Azure Data Studio) |
| db:seed | tsx prisma/seed.ts | (custom EF Core seed method) |
| clean | rimraf dist .next | dotnet clean |

Last updated: 2026-02-18