Appendix C — Stack Decision Log
Each entry documents a deliberate technology choice. The format is consistent: what was chosen, what was considered, why the choice was made, and what conditions would prompt reconsideration.
TypeScript
What we chose: TypeScript 5.x, strict mode enabled.
What we considered: Plain JavaScript, JSDoc type annotations over plain JS.
Why we chose it: TypeScript eliminates an entire category of runtime errors that .NET engineers expect the compiler to catch. It enables refactoring with confidence, produces better IDE tooling, and is now the default in every major framework (NestJS, Next.js, Nuxt). Strict mode is non-negotiable — it closes the loopholes that make partial TS adoption equivalent to untyped JS.
What we’d reconsider: If TC39 ships the Type Annotations proposal to Stage 4 and runtimes support inline type stripping natively, the build step for TS could be eliminated. This does not change our commitment to typed development, only the toolchain.
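The value of strict mode is easiest to see in code. A minimal sketch (the service names and ports are invented for illustration): under "strict": true, Map.get returns number | undefined, and the compiler refuses to let the value be used until the undefined case is handled — exactly the null-safety discipline .NET engineers get from nullable reference types.

```typescript
// Under "strict": true, ports.get() returns number | undefined.
// The guard below is required: without it, tsc rejects the return
// statement. Plain JS would defer this mistake to a runtime error.
const ports = new Map<string, number>([
  ["web", 3000],
  ["api", 8080],
]);

function portOf(service: string): number {
  const port = ports.get(service);
  if (port === undefined) {
    throw new Error(`unknown service: ${service}`);
  }
  return port; // narrowed from number | undefined to number
}

console.log(portOf("api")); // prints 8080
```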
Node.js
What we chose: Node.js LTS (22.x as of 2026), with native TypeScript support via --experimental-strip-types or tsx for local development.
What we considered: Deno, Bun.
Why we chose it: Node.js has the largest package ecosystem, the deepest framework support (NestJS, Express, Fastify all target Node first), and the most mature production track record. Deno has a compelling security model but limited ecosystem adoption. Bun is fast but its Node compatibility layer still has edge cases in 2026 that create friction in production.
What we’d reconsider: Bun is the most likely successor as its compatibility surface matures. Deno becomes compelling if its npm compatibility is validated across the full dependency tree of our chosen frameworks.
React
What we chose: React 19 — via Next.js (App Router) for applications requiring server-side rendering, and via Vite for client-only tooling applications.
What we considered: Solid.js, Preact, Svelte.
Why we chose it: React has the largest talent pool — .NET engineers transitioning to the JS stack will find more examples, answers, and community support than any alternative. Its component model maps well to the mental models .NET engineers bring from Blazor. React’s ecosystem (TanStack Query, shadcn/ui, Radix UI) is unmatched in depth.
What we’d reconsider: Solid.js for applications where render performance is a primary concern. Svelte for smaller tools where bundle size matters and the team can accept a smaller hiring pool.
Vue 3
What we chose: Vue 3 with the Composition API (<script setup>) for applications where the team has existing Vue investment (e.g., legacy upgrades from Vue 2).
What we considered: React (as the default alternative), Svelte.
Why we chose it: Vue 3’s Composition API is close enough to React hooks that cross-training is feasible. Single File Components (SFCs) are a natural fit for engineers used to .razor files — one file, one component. Vue’s progressive adoption model also allows us to wrap existing components without full rewrites.
What we’d reconsider: New greenfield projects default to React unless there is an existing Vue codebase or a specific team skill requirement. Vue’s ecosystem, while strong, is smaller than React’s.
Next.js
What we chose: Next.js 15 (App Router) for full-stack applications requiring SSR, SSG, or ISR with React.
What we considered: Remix, Astro, plain Vite + React with a separate API layer.
Why we chose it: Next.js is the dominant React meta-framework with the widest deployment support, the deepest ecosystem integrations (Prisma, Clerk, Vercel, Sentry all publish official Next.js guides), and the most active development. The App Router’s server component model resolves the data-fetching patterns that caused friction in the Pages Router.
What we’d reconsider: Remix for applications with complex nested routing and mutation-heavy forms, where its action/loader model is a cleaner fit. Astro for content-heavy sites where JS interactivity is minimal.
Nuxt
What we chose: Nuxt 3 for full-stack Vue applications.
What we considered: SvelteKit, plain Vite + Vue with a separate API layer.
Why we chose it: Nuxt 3 is the Vue equivalent of Next.js — it provides SSR, file-based routing, server routes (via Nitro), and auto-imports in a single package. For teams with Vue 3 investment, Nuxt removes the need to compose these concerns manually.
What we’d reconsider: If a project does not require SSR and Vue 3 is chosen purely for component model familiarity, plain Vite + Vue is simpler and avoids Nuxt’s additional abstraction layers (Nitro, auto-imports) which can obscure debugging.
NestJS
What we chose: NestJS 10 for backend API services.
What we considered: Express, Fastify, Hono, tRPC (for type-safe API without a framework).
Why we chose it: NestJS is the most directly transferable framework for .NET engineers. Its module/controller/service/DI architecture is a deliberate mirror of ASP.NET Core. The learning curve from .NET to NestJS is shorter than from .NET to Express. It ships with OpenAPI/Swagger support, a mature testing utilities layer, and first-class TypeScript.
What we’d reconsider: Hono for lightweight edge-deployed APIs where NestJS’s startup overhead and decorator-heavy model are a poor fit. tRPC for internal service-to-service APIs where the client and server are in the same monorepo and type-safe RPC eliminates the need for a REST contract.
Prisma
What we chose: Prisma ORM for data access in applications requiring a full ORM with migrations and an intuitive query API.
What we considered: Drizzle ORM, TypeORM, MikroORM, raw SQL with the pg driver.
Why we chose it: Prisma’s schema file and generated client produce a developer experience closer to EF Core than any alternative — a typed client derived from the schema, a migration CLI, and readable query results without manual mapping. The Prisma Studio GUI aids onboarding. For .NET engineers familiar with DbContext, prisma.user.findMany() is immediately legible.
What we’d reconsider: Drizzle ORM is preferred when: (a) the team wants SQL-first control over queries without ORM magic, (b) the application is deployed serverless and Prisma’s connection management adds latency, or (c) the N+1 query generation in Prisma relations becomes a performance bottleneck that cursor pagination does not resolve.
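For readers coming from EF Core, a minimal sketch of the Prisma workflow (model and field names are invented for illustration): a single schema file drives both migrations and the generated client.

```prisma
// schema.prisma (illustrative fragment)
model User {
  id    Int    @id @default(autoincrement())
  email String @unique
  posts Post[]
}

model Post {
  id       Int    @id @default(autoincrement())
  title    String
  author   User   @relation(fields: [authorId], references: [id])
  authorId Int
}
```

prisma migrate dev derives SQL migrations from this file, and the generated client exposes typed queries such as prisma.user.findMany({ include: { posts: true } }) — much like querying a DbContext with navigation properties.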
Drizzle ORM
What we chose: Drizzle ORM for data access in applications where SQL proximity, performance, and serverless compatibility are priorities.
What we considered: Prisma, Kysely, raw SQL with pg.
Why we chose it: Drizzle is type-safe without code generation — the schema is TypeScript, the queries are TypeScript, and the output type is inferred at the call site. It produces literal SQL with no hidden N+1 patterns. Its serverless compatibility (no connection pool management, works on edge runtimes) makes it the correct choice for Next.js API routes and Cloudflare Workers.
What we’d reconsider: Prisma for teams where the schema-driven generator workflow (auto-complete on prisma.user.*) and migration UX (prisma migrate dev) outweigh Drizzle’s verbosity. The two ORMs solve different parts of the developer experience.
PostgreSQL
What we chose: PostgreSQL 16 as the primary relational database.
What we considered: MySQL/MariaDB, SQLite (dev only), PlanetScale (MySQL-based).
Why we chose it: PostgreSQL is the most capable open-source relational database. It supports JSON columns, full-text search, array types, row-level security, CTEs, window functions, and advisory locks — features that MySQL requires workarounds for. Both Prisma and Drizzle have the deepest PostgreSQL support. Render provides managed PostgreSQL as a first-class product.
What we’d reconsider: MySQL if a client mandates it. SQLite remains the correct choice for local development requiring zero infrastructure and for embedded or single-file deployment scenarios (via Turso/libSQL in production).
pnpm
What we chose: pnpm 9 as the package manager.
What we considered: npm, Yarn (classic and Berry), Bun’s package manager.
Why we chose it: pnpm uses a content-addressable store that links packages rather than copying them, producing faster installs and dramatically smaller disk usage in monorepos. Its workspace protocol (workspace:*) links local packages cleanly, and its strict node_modules layout prevents phantom dependencies (packages importing modules they never declared). This enforces the same explicitness .NET engineers expect from NuGet references.
What we’d reconsider: Bun’s built-in package manager if/when Bun runtime adoption in production reaches the point where the entire toolchain unification is net positive.
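A sketch of what the workspace protocol looks like in practice (package names are invented): the app declares its dependency on a sibling package with workspace:*, and pnpm links it from the monorepo rather than resolving it from the registry.

```json
{
  "name": "@acme/web",
  "dependencies": {
    "@acme/shared": "workspace:*",
    "react": "^19.0.0"
  }
}
```

Under pnpm’s strict layout, importing any module not listed in dependencies fails to resolve — which is precisely what blocks phantom dependencies.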
Vitest
What we chose: Vitest as the unit and integration test runner.
What we considered: Jest, Mocha, Node’s built-in node:test.
Why we chose it: Vitest is Jest-compatible (same describe/it/expect API, same mock system) but runs inside the Vite pipeline — no separate Babel transform, instant startup, native ESM support. For projects already using Vite (most React/Vue projects), Vitest requires no additional configuration. For NestJS projects, Vitest runs faster than Jest and handles TypeScript without ts-jest.
What we’d reconsider: Node’s built-in node:test runner is improving rapidly. If it reaches Jest API parity and Vitest’s ecosystem plugins (coverage, snapshot serializers) stabilize around it, the dependency could be eliminated.
Playwright
What we chose: Playwright for end-to-end browser testing and API integration testing.
What we considered: Cypress, Selenium, WebdriverIO.
Why we chose it: Playwright is maintained by Microsoft, supports all major browsers (Chromium, Firefox, WebKit) from a single API, runs tests in parallel by default, and has a superior async model compared to Cypress. Its request context supports API-layer testing without a browser, replacing the need for a separate HTTP integration test tool. The Playwright Trace Viewer provides debugging capabilities that Cypress cannot match.
What we’d reconsider: Cypress for teams where its real-time visual debugger and time-travel replay are higher priorities than cross-browser coverage or parallelism. Cypress’s component testing story is also strong for isolated React/Vue component tests.
Tailwind CSS
What we chose: Tailwind CSS v4 as the primary styling solution.
What we considered: CSS Modules, vanilla CSS with design tokens, styled-components, Emotion, UnoCSS.
Why we chose it: Tailwind eliminates the naming problem in CSS. For .NET engineers who are not CSS specialists, utility classes produce consistent, responsive UIs without the cognitive overhead of BEM naming, specificity conflicts, or dead style accumulation. v4’s CSS-first configuration (no tailwind.config.js) and native cascade layer support reduce boilerplate significantly. shadcn/ui is built on Tailwind, which drives further adoption.
What we’d reconsider: CSS Modules for teams where the utility-class model creates long className strings that obscure component structure. UnoCSS for monorepos where the fastest possible build time justifies the smaller community.
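v4’s CSS-first configuration replaces tailwind.config.js with directives in the stylesheet itself. A minimal sketch (the token names and values are our own, for illustration):

```css
/* app.css — Tailwind v4 configuration lives in CSS, not JS */
@import "tailwindcss";

@theme {
  --color-brand: #1e40af;              /* generates bg-brand, text-brand, etc. */
  --font-display: "Inter", sans-serif; /* generates font-display */
}
```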
shadcn/ui
What we chose: shadcn/ui as the component primitive layer.
What we considered: Chakra UI, MUI (Material), Radix UI directly, Mantine, Headless UI.
Why we chose it: shadcn/ui is not a component library — it is a CLI that copies components into your codebase. This means components are owned and customizable without fighting library internals. It is built on Radix UI primitives (accessible by default) and styled with Tailwind. For .NET engineers used to owning their markup, this model is more intuitive than consuming a black-box component library. No runtime dependency on a component package version.
What we’d reconsider: Mantine or Chakra if the project requires a large number of complex data-display components (data grids, date pickers, complex charts) that shadcn/ui does not provide and building them from Radix primitives is not feasible within the project timeline.
GitHub
What we chose: GitHub for source control, issue tracking, PR workflow, and CI/CD via GitHub Actions.
What we considered: GitLab, Bitbucket, Azure DevOps.
Why we chose it: GitHub is the center of gravity for open-source JS/TS tooling — every library references GitHub issues, GitHub Discussions, and GitHub Releases. GitHub Actions has the largest marketplace of pre-built actions, and the gh CLI is the most capable Git-host CLI available. Integrations with Sentry, Snyk, SonarCloud, Vercel, Render, and Clerk all offer GitHub as the primary OAuth and webhook target.
What we’d reconsider: Azure DevOps for enterprise clients who have existing Azure contracts and require Active Directory integration. GitLab for teams requiring self-hosted SCM with no external data transfer.
GitHub Actions
What we chose: GitHub Actions for CI/CD pipelines.
What we considered: CircleCI, Jenkins, Azure Pipelines, Buildkite.
Why we chose it: GitHub Actions runs in the same environment as the repository with zero additional authentication setup. The YAML workflow syntax will feel familiar to .NET engineers who have used Azure Pipelines. The marketplace provides actions for every tool in our stack (pnpm, Playwright, Prisma, Render deploy, Snyk, SonarCloud). Free-tier minutes are sufficient for most projects.
What we’d reconsider: Buildkite for monorepos with long test suites requiring custom runner orchestration. Azure Pipelines for projects already hosted in Azure DevOps.
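A minimal CI sketch under these choices (action versions and the "test" script name are assumptions to verify against current releases):

```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm test   # assumes a "test" script that runs Vitest
```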
Render
What we chose: Render for hosting web services, background workers, and managed PostgreSQL.
What we considered: Vercel, Railway, Fly.io, AWS ECS, Azure App Service.
Why we chose it: Render’s pricing model is predictable, its managed PostgreSQL requires no infrastructure expertise, and its deploy-from-GitHub workflow is the simplest production deployment available without vendor lock-in to a specific framework (unlike Vercel, which optimizes for Next.js). For .NET engineers unfamiliar with container orchestration, Render eliminates the infrastructure surface area entirely while remaining Docker-compatible.
What we’d reconsider: Vercel for Next.js applications where edge caching, ISR, and regional deployments are critical — Render does not replicate Vercel’s edge network. Fly.io for latency-sensitive applications requiring global presence with custom Docker images. AWS ECS/Fargate once the team has DevOps capacity to manage it.
Sentry
What we chose: Sentry for error tracking and performance monitoring.
What we considered: Datadog, New Relic, Rollbar, Highlight.io.
Why we chose it: Sentry is the standard in the JS/TS ecosystem — every major framework (Next.js, Nuxt, NestJS) provides official Sentry integration documentation. Its source map support for TypeScript stack traces is excellent. The free tier covers small applications. For .NET engineers accustomed to Application Insights, Sentry provides the same error aggregation, user impact assessment, and performance transaction tracing.
What we’d reconsider: Datadog for organizations that already use it for infrastructure monitoring and want a unified observability platform. OpenTelemetry with a self-hosted backend for teams with strong DevOps capacity who want to avoid per-event pricing.
Clerk
What we chose: Clerk for user authentication, session management, and user management UI.
What we considered: Auth.js (NextAuth), Supabase Auth, Firebase Auth, custom JWT implementation.
Why we chose it: Clerk provides hosted authentication with pre-built React components (sign-in, sign-up, user profile, organization management) and a managed user database. For .NET engineers building their first JS application, writing a secure auth system from scratch (PKCE, token rotation, session management) introduces significant risk. Clerk eliminates this entirely and integrates with Next.js, NestJS, and Remix via official SDKs. It supports SAML/SSO for enterprise clients without additional implementation.
What we’d reconsider: Auth.js for projects where hosting user data with a third party is not acceptable, or where the budget cannot support Clerk’s per-active-user pricing at scale. Supabase Auth if the project already uses Supabase as the database layer.
SonarCloud
What we chose: SonarCloud for static code analysis and code quality gates in CI.
What we considered: SonarQube (self-hosted), ESLint standalone, CodeClimate, DeepSource.
Why we chose it: SonarCloud integrates with GitHub PRs to enforce quality gates (coverage thresholds, complexity limits, duplication detection) without self-hosting infrastructure. For .NET engineers familiar with SonarQube, SonarCloud is the cloud-hosted equivalent. It covers TypeScript analysis, security hotspot detection, and cognitive complexity scoring — metrics that ESLint alone does not provide.
What we’d reconsider: DeepSource for faster PR feedback times. SonarQube self-hosted for organizations with data residency requirements. ESLint + custom rules for teams that want lightweight, opinionated linting without the SonarCloud dashboard overhead.
Snyk
What we chose: Snyk for software composition analysis (SCA) — dependency vulnerability scanning.
What we considered: Dependabot (GitHub native), npm audit, OWASP Dependency-Check, Mend (WhiteSource).
Why we chose it: Snyk provides actionable remediation advice, not just CVE listings. Its GitHub integration automatically opens PRs for vulnerable dependencies. It scans package.json, Dockerfiles, and IaC configurations in a single tool. For .NET engineers used to dotnet list package --vulnerable, Snyk is the direct analog with a richer UX.
What we’d reconsider: Dependabot alone for projects with minimal security compliance requirements — it is free and built into GitHub. Mend for enterprise clients requiring SLA-backed SCA with license compliance reporting.
Semgrep
What we chose: Semgrep for custom static analysis rules and SAST (Static Application Security Testing).
What we considered: CodeQL, ESLint security plugins, Snyk Code.
Why we chose it: Semgrep allows writing custom rules in a pattern language that mirrors source code syntax. This makes it possible to enforce project-specific conventions (e.g., “never use Math.random() for security-sensitive values,” “always validate input with Zod before database operations”) alongside community rulesets. Its OWASP and security rulesets detect injection, XSS, and path traversal patterns that ESLint cannot.
What we’d reconsider: CodeQL for projects requiring the deepest interprocedural data-flow analysis available. CodeQL’s query language has a steeper learning curve but catches vulnerabilities that pattern-matching tools miss.
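The Math.random() convention mentioned above translates directly into a rule. A sketch (the rule id, paths, and message are ours; check against the Semgrep rule syntax documentation before adopting):

```yaml
rules:
  - id: no-math-random-for-secrets
    pattern: Math.random()
    paths:
      include:
        - "src/auth/**"
        - "src/crypto/**"
    message: Use crypto.randomUUID() or crypto.getRandomValues() for security-sensitive values.
    languages: [typescript, javascript]
    severity: ERROR
```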
Claude Code
What we chose: Claude Code (Anthropic) as the AI coding assistant integrated into the development workflow.
What we considered: GitHub Copilot, Cursor, Codeium, JetBrains AI.
Why we chose it: Claude Code operates as a command-line agent with read/write access to the repository, enabling multi-file refactors, architecture analysis, and document generation — not just single-line completions. For .NET engineers learning an unfamiliar ecosystem, the ability to ask “why does this behave differently from C#” and receive a contextual, codebase-aware answer accelerates the transition. Claude’s instruction-following precision on complex multi-step tasks (migrations, test generation, PR creation) reduces review burden.
What we’d reconsider: GitHub Copilot for teams where deep IDE integration (inline suggestions, test generation inside the editor) is the primary use case and terminal-based agent workflows are not adopted. Cursor for teams that want an AI-first IDE rather than a CLI-first agent.
Last updated: 2026-02-18