4B.3 — The Polyglot Decision Framework: Choosing the Right Backend for Each Service

For .NET engineers who know: ASP.NET Core API design, service architecture, and the trade-offs of adding infrastructure complexity to a system

You’ll learn: A structured decision framework for choosing between NestJS, ASP.NET Core, and FastAPI — and the architectural patterns that make polyglot systems maintainable rather than chaotic

Time: 15-20 minutes


The most common mistake polyglot teams make is not choosing the wrong technology — it is having no framework for making the choice at all. Individual engineers solve problems in the language they know best. Teams drift toward technology accumulation: one service in NestJS because the frontend team built it, one in .NET because the backend team had a deadline, one in Python because someone wanted to try FastAPI. Six months later, you have operational complexity without any of the intended benefits.

This article gives you the decision framework. It is not a technology tutorial. No “here’s how to install NestJS” — you have Article 4.1 for that. This is the architectural reasoning layer: when you are standing in front of a new service requirement, how do you choose?


The .NET Way (What You Already Know)

In a mature .NET shop, the technology choice is usually already made. You write C#. You use ASP.NET Core. You use EF Core and SQL Server. The ecosystem is cohesive, your team has deep experience, and switching frameworks for a single service is a significant organizational event.

The architectural decisions you’ve made in .NET are good ones: vertical slice architecture, CQRS patterns, domain-driven design — these are language-agnostic. What the .NET ecosystem gives you is a stable platform where you can apply those patterns without ecosystem churn.

// In .NET, you don't think much about "which backend to use"
// You think about architecture within the backend you have:
// Domain model, service layer, repository pattern, validation, etc.

public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;
    private readonly IPaymentGateway _payment;
    private readonly ILogger<OrderService> _logger;

    // The framework choice (ASP.NET Core) is settled.
    // The architecture question is: how do we structure this domain?
    public async Task<OrderResult> PlaceOrderAsync(PlaceOrderCommand command)
    {
        // Complex domain logic — exactly where .NET excels
        var order = Order.Create(command.CustomerId, command.Items);
        await _repository.SaveAsync(order);
        var paymentResult = await _payment.ChargeAsync(order.Total, command.PaymentToken);
        // ...
    }
}

The challenge arrives when you expand beyond pure .NET: you are now building a system that needs a TypeScript frontend, a Python ML component, and potentially a NestJS BFF layer. The question “which backend?” is no longer trivially answered by “the one we know.”


The TypeScript Stack Way

When your team has TypeScript, .NET, and Python capabilities, each new service is a decision point. The framework for making that decision has two parts: a decision matrix that maps factors to technology choices, and a set of architectural patterns that define how those choices fit together.

The Decision Matrix

| Factor | NestJS (TypeScript) | ASP.NET Core (.NET) | FastAPI (Python) |
|---|---|---|---|
| Domain | Web apps, BFF, CRUD APIs, event-driven services | Enterprise logic, financial, high-throughput, regulatory | ML inference, NLP, AI agents, embedding, data pipelines |
| Team expertise | Team is building TypeScript fluency; monorepo already exists | Team has deep .NET experience; complex domain logic already in C# | Feature requires ML libraries that only exist in Python |
| Type safety | tRPC end-to-end (best); no code gen required | OpenAPI codegen (good); NSwag generates C# and TS clients | OpenAPI codegen (good); FastAPI auto-generates from Pydantic |
| Performance | Good for I/O-bound; single-threaded event loop | Best for CPU-bound + high I/O; multi-threaded CLR | Best for ML (C extensions bypass GIL); slower for plain web |
| Ecosystem | npm — massive but fragmented; validate packages carefully | NuGet — curated, enterprise-grade, stable | PyPI — dominant for data science and ML; variable quality for web |
| Hosting | Render, Vercel, any Node-capable platform | Render, Azure, any .NET-capable platform | Render (CPU); GPU providers (Lambda Labs, Modal) for inference |
| Monorepo fit | Shares TypeScript types with frontend; tRPC works naturally | Separate repo or API boundary; OpenAPI is the bridge | Separate repo or API boundary; OpenAPI is the bridge |
| Cold start | Fast (Node.js) | Fast (compiled; Native AOT optional) | Slow for ML (model loading: 10-30 seconds on first request) |

The Decision Rules

The matrix gives you factors. These rules give you the decision:

Rule 1: Default to TypeScript for new services.

If you can implement the service in TypeScript without significant trade-offs, do it in TypeScript. The monorepo type-sharing benefit is real and compounding: shared Zod schemas, shared constants, shared utility functions, tRPC type inference across the full stack. Every additional language in your system multiplies your operational surface area. Earn the additional complexity.
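The compounding benefit is concrete: a shared workspace package defines a type and its helpers once, and both the Next.js app and the NestJS service import them. A minimal sketch (the package layout and names are illustrative, not from this article's repo):

```typescript
// packages/shared/src/order.ts — hypothetical shared module, imported by
// both apps/web (Next.js) and the NestJS API in the same pnpm workspace
export interface OrderSummary {
  id: string;
  totalCents: number; // money as integer cents, shared by both sides
}

// One formatting rule, defined once. The frontend and backend can never
// disagree about how a total is rendered.
export function formatTotal(order: OrderSummary): string {
  return `$${(order.totalCents / 100).toFixed(2)}`;
}
```

Rename a field in `OrderSummary` and every consumer fails to compile in the same `tsc` run — no contract regeneration step in between.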

Rule 2: Keep .NET for what .NET already owns.

“We already have it in .NET” is a valid architectural reason to keep it in .NET. Migration for migration’s sake is waste. If you have a mature, battle-tested ASP.NET Core API with years of domain logic, carefully tuned EF Core queries, and production-proven performance characteristics, the TypeScript replacement must provide measurable benefit — not just stylistic consistency. See Article 4B.1 for the specific cases where .NET is the right choice.

Rule 3: Python only for capabilities that don’t exist elsewhere.

Python’s web frameworks are slower and less ergonomic than NestJS or ASP.NET Core for standard API work. The GIL limits true parallelism. Python services should be narrowly scoped: ML inference, NLP pipelines, LLM orchestration, embedding generation. If the feature doesn’t require PyTorch, Hugging Face, LangChain, or similar Python-dominant ML libraries, it should not be in Python. See Article 4B.2 for the full picture.

Rule 4: Minimize the number of languages in any given service boundary.

A service is one codebase deployed as one unit. That service should be one language. The polyglot architecture is about choosing the right language per service — not mixing languages inside a single service.


The Four Architectural Patterns

Every system you build with this stack will be a variant of one of these four patterns. Each is appropriate in specific circumstances.

Pattern 1: All-TypeScript (The Default)

graph TD
    B1["Browser"]
    N1["Next.js (React / Server Components)"]
    API1["NestJS API"]
    DB1["PostgreSQL"]
    B1 --> N1
    N1 -->|"tRPC (full type safety, no code gen)"| API1
    API1 -->|"Prisma"| DB1

When to use it: New greenfield projects without ML requirements and without significant existing .NET investment. Your team is building TypeScript fluency. You want maximum type safety with minimum infrastructure complexity.

The advantage: A single pnpm workspace contains both the Next.js frontend and the NestJS backend. A type change in a Prisma model flows through the NestJS service into the tRPC router and appears as a TypeScript error in the Next.js component — all in one tsc invocation. No API spec generation. No code generation pipelines. No contract tests.

When it breaks down: You need ML inference (Python required), you have a significant existing .NET codebase worth preserving, or you have CPU-bound compute requirements that exceed Node.js’s capabilities.

// All-TypeScript: tRPC router in NestJS calls Prisma, frontend calls tRPC
// packages/api/src/routers/orders.router.ts
export const ordersRouter = router({
  getById: protectedProcedure
    .input(z.object({ id: z.string().cuid() }))
    .query(async ({ input, ctx }) => {
      return ctx.prisma.order.findUniqueOrThrow({
        where: { id: input.id, userId: ctx.user.id },
        include: { items: true },
      });
    }),
});

// apps/web/src/app/orders/[id]/page.tsx
// The return type of getById is inferred automatically — no generated code
export default async function OrderPage({ params }: { params: { id: string } }) {
  const order = await api.orders.getById.query({ id: params.id });
  // order is fully typed from the Prisma query — no casting, no any
  return <OrderDetail order={order} />;
}

Pattern 2: TypeScript Frontend + .NET API

graph TD
    B2["Browser"]
    N2["Next.js (React / Server Components)"]
    API2["ASP.NET Core API"]
    DB2["SQL Server / PostgreSQL"]
    B2 --> N2
    N2 -->|"OpenAPI-generated TypeScript types\n(orval / openapi-typescript)"| API2
    API2 -->|"Entity Framework Core"| DB2

When to use it: You have an existing ASP.NET Core API with significant business logic that is not worth rewriting. The .NET backend is mature, performant, and well-tested. The frontend is new and benefits from Next.js’s SSR and developer experience.

The advantage: You preserve years of .NET investment while gaining a modern, server-rendered TypeScript frontend. Clerk handles auth on the frontend; the .NET API validates Clerk JWTs on the server. See Article 4B.1 for the full implementation.

The operational cost: Types do not flow automatically. You need an OpenAPI spec generation step in your .NET CI pipeline, and a type regeneration step in your frontend CI pipeline. Breaking changes in .NET endpoints will fail the frontend build — which is the intended behavior.

// ASP.NET Core — generate OpenAPI spec as part of CI
// Program.cs
builder.Services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo { Title = "API", Version = "v1" });
    // Include XML comments for richer spec generation
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFile));
});

// CI step: dotnet build && dotnet swagger tofile --output openapi.json <ApiAssembly>.dll v1

// Next.js frontend — consume the generated OpenAPI types
// orval.config.ts generates TanStack Query hooks from the spec
// apps/web/src/hooks/use-order.ts (generated by orval)
export const useGetOrderById = (id: string) =>
  useQuery({
    queryKey: ["order", id],
    queryFn: () => getOrderById(id), // typed from OpenAPI spec
  });

Pattern 3: TypeScript Frontend + Python AI Service

graph TD
    B3["Browser"]
    N3["Next.js (React / Server Components + SSE streaming)"]
    FA["FastAPI (Python)\nPyTorch / Hugging Face"]
    NS["NestJS (TypeScript)\nPrisma / PostgreSQL"]
    B3 --> N3
    N3 -->|"REST + Server-Sent Events"| FA
    N3 -->|"OpenAPI-generated TypeScript types"| NS

When to use it: A specific feature in your application requires ML inference — a recommendation engine, a summarization endpoint, an embedding service, an LLM-powered chat feature — and the rest of the application is well-served by TypeScript.

The advantage: Python does what Python is uniquely good at. NestJS handles everything else. The frontend gets consistent types from both services via OpenAPI generation.

The operational cost: Two separate deployments with separate CI pipelines. FastAPI cold start for ML model loading. Contract management between two independently deployed services.

# FastAPI — auto-generates OpenAPI spec from Pydantic models
# api/routes/summarize.py
from pydantic import BaseModel
from fastapi import APIRouter
from transformers import pipeline

class SummarizeRequest(BaseModel):
    text: str
    max_length: int = 150

class SummarizeResponse(BaseModel):
    summary: str
    token_count: int

router = APIRouter()

# Loaded once at import time — this is the 10-30 second cold start
summarizer = pipeline("summarization")

@router.post("/summarize", response_model=SummarizeResponse)
async def summarize(request: SummarizeRequest) -> SummarizeResponse:
    # Hugging Face Transformers — only possible in Python
    result = summarizer(request.text, max_length=request.max_length)
    return SummarizeResponse(
        summary=result[0]["summary_text"],
        token_count=len(result[0]["summary_text"].split())
    )

// Next.js — consume the FastAPI OpenAPI spec via openapi-typescript
// apps/web/src/app/api/summarize/route.ts (proxy route for auth)
import type { components } from "@/types/python-api"; // generated

type SummarizeResponse = components["schemas"]["SummarizeResponse"];

export async function POST(request: Request) {
  const body = await request.json();
  const token = await getAuthToken(request);

  const res = await fetch(`${process.env.PYTHON_API_URL}/summarize`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(body),
  });

  const data: SummarizeResponse = await res.json();
  return Response.json(data);
}

Pattern 4: Full Polyglot with NestJS BFF

graph TD
    B4["Browser"]
    N4["Next.js (React / Server Components)"]
    BFF["NestJS BFF (Backend-for-Frontend)"]
    ASPNET["ASP.NET Core API\n(enterprise logic, financials, EF Core)"]
    PY["FastAPI\n(ML inference, embeddings, LLM)"]
    B4 --> N4
    N4 -->|"tRPC or typed REST"| BFF
    BFF -->|"REST"| ASPNET
    BFF -->|"REST"| PY

When to use it: You have multiple backends that serve different capabilities. The frontend complexity of calling two or three separate APIs with different auth schemes, different response shapes, and independent failure modes becomes unmanageable. The NestJS BFF consolidates this into a single, typed API surface.

The advantage: The frontend talks to one service, in one language, with one auth scheme. The NestJS BFF handles response aggregation, circuit breaking, and frontend-specific data shaping. If the Python ML service is down, the BFF can return degraded data rather than failing the entire request.

The operational cost: This is the most complex pattern — three (or more) deployments, three CI pipelines, contract management across all service boundaries, and a dedicated team to maintain the BFF. Do not adopt this pattern until you have exhausted simpler options.

// NestJS BFF — aggregates data from .NET and Python backends
// apps/bff/src/orders/orders.service.ts
@Injectable()
export class OrdersService {
  constructor(
    private readonly dotnetClient: DotnetApiClient,     // typed HTTP client to .NET API
    private readonly pythonClient: PythonApiClient,     // typed HTTP client to FastAPI
  ) {}

  async getOrderWithInsights(orderId: string, userId: string) {
    // Fan out to both backends in parallel
    const [order, insights] = await Promise.allSettled([
      this.dotnetClient.orders.getById(orderId),
      this.pythonClient.analytics.getOrderInsights(orderId),
    ]);

    return {
      // Always present — from reliable .NET API
      order: order.status === "fulfilled" ? order.value : null,
      // Gracefully degraded — Python ML service may be slow or unavailable
      insights: insights.status === "fulfilled" ? insights.value : null,
    };
  }
}

Key Differences

| Decision dimension | .NET mindset | Polyglot mindset |
|---|---|---|
| Default technology | Always C# / ASP.NET Core | TypeScript by default; .NET or Python earned by specific need |
| Team cost | One language, unified expertise | Each additional language requires additional operational depth |
| Type sharing | Strong (C# everywhere) | Strongest with TypeScript (tRPC); contract-based with .NET/Python (OpenAPI) |
| Deployment units | One deployable per service | Same — but with more services and more CI pipelines |
| Failure isolation | Service-level (you’re already doing this) | Pattern 4 (BFF) adds circuit breaking at the aggregation layer |
| Migration path | Rarely migrate the runtime | Explicitly plan the boundary — what stays in .NET, what moves |

Gotchas for .NET Engineers

Gotcha 1: Polyglot complexity compounds faster than you expect

Each additional language in your system multiplies your debugging surface area, your deployment complexity, and your hiring requirements. A bug that involves a type mismatch between a FastAPI response and a Next.js component requires tracing through Python Pydantic serialization, OpenAPI spec generation, TypeScript type generation, and React component props. In a mono-language system, that’s one runtime and one type system.

The rule is not “don’t go polyglot.” The rule is: the benefit must be measurable and specific. “Python is better for ML” is measurable and specific. “Python feels more modern” is not.

Cost of adding a language:
  + Additional CI pipeline
  + Different logging/monitoring setup
  + Different deployment configuration
  + Different error message formats
  + Different auth integration pattern
  + Different developer onboarding
  + Separate OpenAPI contract maintenance
  ─────────────────────────────────────
  Total: significant — earn it deliberately

Gotcha 2: The BFF pattern is often premature

Pattern 4 (NestJS BFF in front of .NET and Python) is the right answer for a specific problem: multiple backend services with different contracts, calling the same frontend. But teams frequently adopt it prematurely because it looks architecturally clean on a diagram.

Before adding a NestJS BFF layer, ask: can the frontend call the .NET API and the Python API directly, with separate TanStack Query hooks and different base URLs? If yes, that is simpler. A BFF earns its existence when the frontend requires aggregated responses that would require multiple round trips, when auth needs to be normalized across services, or when you need to shield the frontend from backend instability.

The BFF pattern adds a deployment, a CI pipeline, and a network hop. It should solve a real problem that simpler approaches cannot.
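The direct-call alternative can be as small as two base URLs and a URL helper. A hedged sketch (hosts and names are illustrative, not from this article's stack):

```typescript
// Hypothetical: the frontend talks to both backends directly — no BFF.
const DOTNET_API = "https://orders.example.com";
const PYTHON_API = "https://ml.example.com";

// Resolve a path against one backend's base URL
export function endpoint(base: string, path: string): string {
  return new URL(path, base).toString();
}

// In the real app, each backend gets its own TanStack Query hook:
//   useQuery({ queryKey: ["order", id],
//              queryFn: () => fetch(endpoint(DOTNET_API, `/orders/${id}`)) })
//   useQuery({ queryKey: ["insights", id],
//              queryFn: () => fetch(endpoint(PYTHON_API, `/insights/${id}`)) })
```

If this is all your frontend needs, the BFF is pure overhead.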

Gotcha 3: “Microservices” and “polyglot” are different decisions

Microservices is an architecture pattern about service boundaries, independent deployment, and scalability. Polyglot is a decision about which language to use within a given service boundary. You can have microservices in a single language. You can have a monolith with a polyglot sidecar.

The most common confusion: teams adopt polyglot architecture and microservices simultaneously, attributing all complexity to one or the other. Separate the decisions. Can you achieve the same outcome with microservices in a single language? If yes, do that first. Polyglot adds language complexity on top of service complexity.

Gotcha 4: Python services have operational characteristics that surprise .NET engineers

ML model loading is not like application startup. A Python service that loads a 1GB Hugging Face model takes 15-30 seconds to become ready. If Render restarts that service (scale-down on idle, deployment, health check failure), users experience that cold start. .NET services and NestJS services start in under a second.

Mitigation strategies: keep ML services on “always on” Render plans with no idle scaling; use health checks that only pass after the model is loaded; pre-download model weights at build time rather than at runtime; consider model serving frameworks like Triton or vLLM that manage model lifecycle separately from the HTTP layer.
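The "health check only passes after the model is loaded" mitigation can be sketched framework-agnostically — here in plain Python with a simulated load (the FastAPI wiring and timings are assumptions, not prescriptions):

```python
import threading
import time

class ReadinessGate:
    """Readiness probe state: report 503 until the model has loaded."""

    def __init__(self) -> None:
        self._ready = threading.Event()

    def mark_ready(self) -> None:
        self._ready.set()

    def health_status(self) -> tuple[int, str]:
        # The orchestrator keeps traffic off this instance until 200
        return (200, "ready") if self._ready.is_set() else (503, "loading")

gate = ReadinessGate()
assert gate.health_status() == (503, "loading")  # cold instance: not ready yet

def load_model() -> None:
    time.sleep(0.01)  # stand-in for a 15-30 second Hugging Face model load
    gate.mark_ready()

loader = threading.Thread(target=load_model)
loader.start()
loader.join()  # in FastAPI, load in a lifespan/startup hook instead
```

In FastAPI this maps to loading the model in a lifespan handler and returning `gate.health_status()` from the health route, so Render's health check fails until the model is actually usable.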

Gotcha 5: OpenAPI contract drift is silent until it breaks production

When your .NET API changes a response shape and you do not regenerate the TypeScript types in your frontend, the mismatch is invisible until runtime. The generated types say one thing; the actual JSON says another. TypeScript’s type system was satisfied at compile time — the type was generated from the spec, the spec was generated from the code, the code changed, but the spec was stale.

This is why CI pipeline ordering matters. The correct order: .NET build and test → generate OpenAPI spec → publish spec artifact → frontend downloads spec → regenerate types → type-check frontend → build frontend. If the spec does not update, the frontend type-check does not catch new incompatibilities. Article 4B.4 covers this pipeline in detail.
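The ordering above can be sketched as a CI workflow — a hypothetical GitHub Actions fragment (job names and scripts are illustrative, not this article's actual pipeline):

```yaml
# Hypothetical CI ordering: spec generation must sit between the .NET
# build and the frontend type-check, or drift goes undetected.
jobs:
  dotnet-api:
    steps:
      - run: dotnet build && dotnet test
      - run: dotnet swagger tofile --output openapi.json Api.dll v1
      - uses: actions/upload-artifact@v4
        with: { name: openapi-spec, path: openapi.json }

  frontend:
    needs: dotnet-api # order matters: fresh spec first
    steps:
      - uses: actions/download-artifact@v4
        with: { name: openapi-spec }
      - run: pnpm run generate:types # regenerate from the fresh spec
      - run: pnpm run typecheck      # now drift fails the build
      - run: pnpm run build
```

The `needs: dotnet-api` dependency is the load-bearing line: without it, the frontend can type-check against a stale spec and pass.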


Hands-On Exercise

This exercise builds the decision muscle, not a deployable artifact. It is a structured analysis you should do for your current or most recent project.

The exercise: Apply the decision framework to a real service you are planning or maintaining.

Step 1: Write down the service requirement in one sentence.

Example: “An API endpoint that accepts a user’s recent purchase history and returns three product recommendations.”

Step 2: Fill in the decision matrix.

For each factor in the matrix, write down the relevant facts:

Domain: "Recommendation engine" — ML classification problem
Team expertise: "Two .NET engineers, one Python data scientist"
Type safety: "Need it — recommendation results must be typed on the frontend"
Performance: "200ms SLA — acceptable for Python with preloaded model"
Ecosystem: "Feature requires scikit-learn collaborative filtering — Python only"
Monorepo fit: "Existing Next.js + NestJS monorepo — Python would be separate"

Step 3: Apply the decision rules.

Rule 1 (Default TypeScript): Can we do this in TypeScript?
  → No. scikit-learn has no TypeScript equivalent. Rule 1 does not apply.

Rule 2 (Keep .NET): Do we have existing .NET code to preserve?
  → No existing .NET code for recommendations. Rule 2 does not apply.

Rule 3 (Python for ML only): Is this Python for ML capability?
  → Yes. FastAPI for the recommendation endpoint, narrowly scoped.

Rule 4 (One language per service): FastAPI service, Python only.
  → Agreed.

Decision: FastAPI service for the recommendation endpoint.
Architecture: Pattern 3 (TypeScript frontend + Python AI service).

Step 4: Identify the integration contract.

# The FastAPI service auto-generates OpenAPI from Pydantic models
# curl http://localhost:8000/openapi.json > specs/recommendations-api.json

# The frontend generates TypeScript types from the spec
# pnpm run generate:types:recommendations

Step 5: Identify the operational costs you are accepting.

Write them down explicitly. “We accept: Python deployment, separate CI pipeline, model cold start, OpenAPI contract maintenance.” If the list feels too long for the benefit, reconsider the decision.


Quick Reference

Decision Flowchart

flowchart TD
    Start["New service or feature requirement"]
    Q1{"Does it require\nPython ML libraries?\n(PyTorch, Hugging Face,\nscikit-learn, LangChain, etc.)"}
    FastAPI["FastAPI (Pattern 3 or 4)"]
    Q2{"Does it extend an existing,\nmature .NET codebase?\n(significant business logic,\nEF Core models, .NET-specific integrations)"}
    ASPNET["ASP.NET Core (Pattern 2 or 4)"]
    Default["Default: NestJS TypeScript (Pattern 1 or 2)"]

    Start --> Q1
    Q1 -->|Yes| FastAPI
    Q1 -->|No| Q2
    Q2 -->|Yes| ASPNET
    Q2 -->|No| Default

When Each Pattern is Right

| Pattern | Use when |
|---|---|
| 1: All-TypeScript | New project, no ML, no significant .NET legacy |
| 2: TS frontend + .NET API | Existing .NET API worth preserving; .NET-specific requirements |
| 3: TS frontend + Python AI | Specific ML/AI feature needed; rest of system is TypeScript |
| 4: NestJS BFF + multiple backends | Multiple backends with different contracts; frontend aggregation required |

The Rule of Thumb

| Situation | Recommendation |
|---|---|
| Can TypeScript do it without significant trade-offs? | Do it in TypeScript |
| Have mature .NET code that works? | Keep it in .NET |
| Need ML/AI capabilities? | Use FastAPI, narrow scope |
| Considering BFF? | Only if you have multiple backends to aggregate |
| Team disagrees on language choice? | Pick TypeScript, revisit in 6 months with data |

Operational Cost Checklist (per additional language)

Before adding a language, confirm you have accounted for:

  • Separate CI/CD pipeline
  • Separate deployment configuration on Render
  • Separate logging/monitoring integration (Sentry, Pino/structlog)
  • OpenAPI contract generation and publication
  • Type regeneration pipeline in frontend CI
  • Contract drift detection (breaking change alerts)
  • Additional developer onboarding documentation
  • Separate error message format handling
  • Auth token forwarding across the language boundary

If you cannot check all items, you are not ready to add the language.


Further Reading

  • [Article 4B.1 — Keeping .NET as Your API] — The complete case for preserving your ASP.NET Core investment with a TypeScript frontend
  • [Article 4B.2 — Python as the Middle Tier] — When and how to use FastAPI for AI/ML services
  • [Article 4B.4 — Cross-Language Type Contracts: OpenAPI as the Universal Bridge] — The CI/CD pipeline that keeps polyglot systems type-safe
  • Sam Newman — Backends For Frontends — the original BFF pattern write-up, still the definitive reference