
4.9 — Logging & Observability: Serilog to Pino + Sentry

For .NET engineers who know: Serilog, ILogger<T>, structured logging enrichers, Application Insights
You’ll learn: How Pino provides structured logging in NestJS, how request ID correlation works in Node.js, and how Sentry covers the error tracking and performance monitoring role that Application Insights plays in .NET
Time: 15-20 minutes


The .NET Way (What You Already Know)

Serilog is the standard structured logger for .NET. You inject ILogger<T> everywhere, configure sinks and enrichers once at startup, and emit structured log events that carry properties beyond the message string.

// C# — Serilog configuration in Program.cs
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .Enrich.WithCorrelationId()           // from Serilog.Enrichers.CorrelationId
    .Enrich.WithMachineName()
    .Enrich.WithEnvironmentName()
    .WriteTo.Console(new JsonFormatter())  // structured JSON to stdout
    .WriteTo.ApplicationInsights(          // push to Azure
        connectionString,
        TelemetryConverter.Traces)
    .CreateLogger();

builder.Host.UseSerilog();
// C# — usage throughout the application
public class OrdersService
{
    private readonly ILogger<OrdersService> _logger;

    public OrdersService(ILogger<OrdersService> logger)
    {
        _logger = logger;
    }

    public async Task<Order> PlaceOrderAsync(PlaceOrderCommand command)
    {
        _logger.LogInformation(
            "Placing order for customer {CustomerId} with {ItemCount} items",
            command.CustomerId,
            command.Items.Count);

        // ... business logic ...

        _logger.LogInformation(
            "Order {OrderId} placed successfully in {ElapsedMs}ms",
            order.Id,
            elapsed.TotalMilliseconds);

        return order;
    }
}

The key properties of Serilog that you rely on day-to-day:

  • Structured events — {CustomerId} becomes a queryable property, not just a substring in a message
  • Context enrichment — LogContext.PushProperty() adds properties to all subsequent events in a scope
  • Request ID correlation — every log line for a request shares a RequestId property, making filtering in Application Insights trivial
  • Minimum level overrides — suppress Microsoft.* noise while keeping your own code at Debug

The Node.js equivalent covers all of this, with a different set of trade-offs.


The Node.js Way

NestJS Built-In Logger

NestJS ships with a Logger class that you can use immediately without configuration:

// TypeScript — NestJS built-in Logger
import { Injectable, Logger } from "@nestjs/common";

@Injectable()
export class OrdersService {
    private readonly logger = new Logger(OrdersService.name);

    async placeOrder(command: PlaceOrderCommand): Promise<Order> {
        this.logger.log(`Placing order for customer ${command.customerId}`);
        this.logger.warn(`Low inventory for SKU ${command.items[0]?.sku}`);
        try {
            return await this.processOrder(command);
        } catch (error) {
            this.logger.error("Order placement failed", (error as Error).stack);
            throw error;
        }
    }
}

The built-in logger writes colored, human-readable output to stdout. It is useful for local development. For production, you replace it with Pino — a structured logger that writes JSON, runs substantially faster, and integrates with log aggregation systems.

Pino: The Serilog of Node.js

Pino is the standard structured logger for production Node.js and NestJS. It is designed around a single performance constraint: logging should be fast enough to do on every request without measurable overhead. It keeps the hot path minimal — serializing newline-delimited JSON with a fast stringifier in the main thread — and, since Pino 7, moves transport work (pretty-printing, shipping to remote sinks) into worker threads.

pnpm add pino pino-http nestjs-pino
pnpm add -D pino-pretty  # dev-only: human-readable output

nestjs-pino is the NestJS integration that replaces the built-in Logger with Pino and provides the @InjectPinoLogger() decorator:

// TypeScript — Pino configuration in AppModule
// src/app.module.ts
import { Module } from "@nestjs/common";
import { LoggerModule } from "nestjs-pino";
import { randomUUID } from "crypto";
import { Request, Response } from "express";

@Module({
    imports: [
        LoggerModule.forRoot({
            pinoHttp: {
                // Use pretty-print in development, JSON in production
                ...(process.env.NODE_ENV !== "production"
                    ? { transport: { target: "pino-pretty", options: { colorize: true } } }
                    : {}),

                // Log level: map environment to Pino levels
                level: process.env.LOG_LEVEL ?? "info",

                // Request ID generation — equivalent to Serilog's RequestId enricher
                genReqId: (req: Request, res: Response): string => {
                    // Honor forwarded request IDs from a load balancer or API gateway
                    const existing = req.headers["x-request-id"];
                    if (typeof existing === "string" && existing) return existing;
                    const id = randomUUID();
                    res.setHeader("x-request-id", id);
                    return id;
                },

                // Customize what gets logged per request
                customReceivedMessage: (req: Request) =>
                    `Incoming ${req.method} ${req.url}`,

                customSuccessMessage: (req: Request, res: Response, responseTime: number) =>
                    `${req.method} ${req.url} ${res.statusCode} — ${responseTime}ms`,

                // Redact sensitive fields from logs — equivalent to Serilog destructuring policies
                redact: {
                    paths: ["req.headers.authorization", "req.body.password", "*.creditCardNumber"],
                    censor: "[REDACTED]",
                },

                // Serializers: control how objects appear in log output
                serializers: {
                    req(req: Request) {
                        return {
                            id: req.id,
                            method: req.method,
                            url: req.url,
                            query: req.query,
                            // Do not log body by default — may contain PII
                        };
                    },
                    res(res: Response) {
                        return { statusCode: res.statusCode };
                    },
                },
            },
        }),
    ],
})
export class AppModule {}

Then tell NestJS to use the Pino logger globally:

// TypeScript — main.ts
import { NestFactory } from "@nestjs/core";
import { Logger } from "nestjs-pino";
import { AppModule } from "./app.module";

async function bootstrap(): Promise<void> {
    const app = await NestFactory.create(AppModule, { bufferLogs: true });
    app.useLogger(app.get(Logger));
    await app.listen(3000);
}
bootstrap();

bufferLogs: true holds startup log messages until Pino is initialized, so you don’t lose the first few lines of output to the default logger.

Using the Pino Logger in Services

// TypeScript — injecting and using Pino in a service
import { Injectable } from "@nestjs/common";
import { InjectPinoLogger, PinoLogger } from "nestjs-pino";

@Injectable()
export class OrdersService {
    constructor(
        @InjectPinoLogger(OrdersService.name)
        private readonly logger: PinoLogger,
    ) {}

    async placeOrder(command: PlaceOrderCommand): Promise<Order> {
        // Structured log — properties are queryable fields, not substrings
        this.logger.info(
            {
                customerId: command.customerId,
                itemCount: command.items.length,
            },
            "Placing order",
        );

        const start = Date.now();
        const order = await this.processOrder(command);

        this.logger.info(
            {
                orderId: order.id,
                customerId: command.customerId,
                elapsedMs: Date.now() - start,
            },
            "Order placed successfully",
        );

        return order;
    }
}

Compare the Pino call signature with Serilog:

// Serilog
_logger.LogInformation("Order {OrderId} placed in {ElapsedMs}ms", order.Id, elapsed.TotalMilliseconds);

// Pino
this.logger.info({ orderId: order.id, elapsedMs }, "Order placed successfully");

The structural intent is identical: you want orderId and elapsedMs as discrete, filterable fields in your log store. Pino’s signature puts the properties object first and the message string second. The emitted JSON looks like:

{
  "level": 30,
  "time": 1739875200000,
  "pid": 1,
  "hostname": "web-1",
  "reqId": "9f3a2b1c-4d5e-6f7a-8b9c-0d1e2f3a4b5c",
  "context": "OrdersService",
  "orderId": "ord_abc123",
  "customerId": "usr_xyz789",
  "elapsedMs": 47,
  "msg": "Order placed successfully"
}

Request ID Correlation

Serilog’s RequestId enricher adds the ASP.NET Core request ID to every log event within a request pipeline. Pino achieves the same result through pino-http: the request ID generated by genReqId is automatically attached to every log event that occurs during that request, because nestjs-pino binds the Pino child logger to the request context via AsyncLocalStorage.

This means you do not need to pass a request ID parameter through your service methods. Any this.logger.info(...) call inside a service — however deep in the call stack — automatically carries the reqId from the originating HTTP request:

// TypeScript — correlation is automatic, no parameter threading needed

// Controller receives request with reqId "abc-123"
@Get(":id")
async findOne(@Param("id") id: string) {
    return this.ordersService.findOne(id); // No need to pass request ID
}

// Service logs automatically include reqId "abc-123"
@Injectable()
export class OrdersService {
    async findOne(id: string): Promise<Order> {
        this.logger.info({ orderId: id }, "Fetching order"); // reqId is there automatically
        return this.repo.findById(id);
    }
}

This is implemented via Node.js AsyncLocalStorage — the equivalent of C#’s AsyncLocal<T> or the HTTP request scope in .NET’s DI container. nestjs-pino manages it for you.
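The mechanism is visible with nothing but the Node standard library. A minimal sketch of the idea — the names here are illustrative, not nestjs-pino's actual internals:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

// A store holds the request ID; any function in the async call
// chain can read it without parameter threading.
const requestContext = new AsyncLocalStorage<{ reqId: string }>();

function log(fields: Record<string, unknown>, msg: string): void {
    const store = requestContext.getStore();
    console.log(JSON.stringify({ reqId: store?.reqId, ...fields, msg }));
}

async function deepServiceCall(): Promise<string> {
    // However deep in the call stack, the reqId is still visible.
    log({ orderId: "ord_1" }, "Fetching order");
    return requestContext.getStore()?.reqId ?? "missing";
}

async function handleRequest(): Promise<string> {
    const reqId = randomUUID();
    // Everything inside run() — including across awaits — sees the same store.
    return requestContext.run({ reqId }, () => deepServiceCall());
}

handleRequest().then((reqId) => console.log(`correlated: ${reqId}`));
```

This is the same pattern as flowing an AsyncLocal<T> value across awaits in C#.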

Pino Log Levels

Pino uses numeric levels that map directly to Serilog’s semantic levels:

Serilog Level | Pino Level | Numeric Value | Use Case
------------- | ---------- | ------------- | --------
Verbose | trace | 10 | Fine-grained debugging, normally off in production
Debug | debug | 20 | Development debugging
Information | info | 30 | Normal operational events
Warning | warn | 40 | Degraded state, recoverable
Error | error | 50 | Errors requiring attention
Fatal | fatal | 60 | Unrecoverable errors, process exit imminent

Set the minimum level via the LOG_LEVEL environment variable. Unlike Serilog, Pino does not support per-namespace minimum level overrides natively — for that, use a transport such as pino-loki that filters at the destination, give a noisy subsystem a child logger with a stricter level, or suppress noisy library logging at the serializer level.

Structured Logging Patterns

Three patterns worth establishing as conventions across the codebase:

1. Error logging — always include the error object:

// TypeScript — correct error logging
try {
    await this.processPayment(order);
} catch (err) {
    // Pass the error as a structured field, not as a string
    // Pino's error serializer extracts message, stack, type
    this.logger.error({ err, orderId: order.id }, "Payment processing failed");
}

Pino has a built-in error serializer that extracts message, type, and stack from an Error object into structured fields. If you log err.message as a string, you lose the stack trace. Always pass the Error object under the err key.
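The difference is easy to see without any library at all. Below, serializeErr is a hypothetical stand-in that mirrors roughly what Pino's err serializer extracts — not the real implementation:

```typescript
// Illustrative approximation of pino.stdSerializers.err's output shape
function serializeErr(err: Error): { type: string; message: string; stack?: string } {
    return { type: err.name, message: err.message, stack: err.stack };
}

const failure = new TypeError("card declined");

// Logging only the message: the error type and stack trace are gone
const asString = JSON.stringify({ msg: `Payment failed: ${failure.message}` });

// Logging the Error under `err`: type and stack become queryable fields
const asFields = JSON.stringify({ err: serializeErr(failure), msg: "Payment failed" });

console.log(asString.includes("TypeError")); // false — type is unrecoverable
console.log(asFields.includes("TypeError")); // true — filterable field
```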

2. Duration tracking — log elapsed time as a number:

// TypeScript — duration as a number, not a string
const start = performance.now();
await externalService.call();
const durationMs = Math.round(performance.now() - start);

this.logger.info({ durationMs, service: "payment-gateway" }, "External call completed");

Log durations as milliseconds (integer). When filtered in a log aggregator like Grafana Loki or Datadog, numeric fields can be aggregated and graphed — avg(durationMs) by service — whereas "took 47ms" is just a string.

3. Business context — always include the domain entity ID:

// TypeScript — include entity IDs in every log
this.logger.info({ userId: user.id, action: "password_reset_requested" }, "User action");
this.logger.warn({ orderId: order.id, reason: "high_value" }, "Order flagged for review");

Every log event for an operation should carry the primary entity ID. This lets you filter all logs for a specific order or user when debugging a customer complaint — the same reason you put {OrderId} in every Serilog message template.

Log Aggregation on Render

On Render, all stdout output from your service is collected and available in the dashboard. For structured JSON logs, Render’s log viewer does basic filtering. For serious log querying, ship logs to a dedicated aggregator.

The most common lightweight setup for teams already using Sentry is to skip a dedicated log aggregator for most cases and rely on Sentry for error-level events. For info/warn logs, Render’s built-in retention (7 days on free, configurable on paid) is often sufficient for early-stage products.

For production-grade log aggregation, the two common choices are:

  • Grafana Loki — pairs with Grafana dashboards, cheap, label-based querying
  • Datadog — expensive, but Application Insights parity if your organization already uses it

Configure a Pino transport to ship to either:

pnpm add pino-loki  # for Grafana Loki
// TypeScript — Pino transport for Grafana Loki (production)
transport: {
    targets: [
        {
            target: "pino-loki",
            options: {
                host: process.env.LOKI_URL,
                labels: {
                    app: "my-api",
                    environment: process.env.NODE_ENV,
                },
                basicAuth: {
                    username: process.env.LOKI_USER,
                    password: process.env.LOKI_PASSWORD,
                },
            },
            level: "info",
        },
    ],
},

Sentry: Application Insights for the Node.js Stack

Application Insights does two things well: error tracking and performance monitoring (APM). Sentry covers both roles in the Node.js stack. The mental model is a direct substitution.

Application Insights Feature | Sentry Equivalent
---------------------------- | -----------------
Exception tracking with stack traces | Sentry.captureException()
Custom events / telemetry | Sentry.captureMessage(), custom breadcrumbs
Performance monitoring (requests, dependencies) | Sentry Performance — automatic for HTTP, DB, queues
User context on errors | Sentry.setUser()
Request context on errors | Sentry.setContext()
ITelemetryProcessor (filter noise) | beforeSend callback
Release tracking | release option — tag errors to a deploy
Breadcrumbs (event timeline) | Sentry breadcrumbs (automatic for HTTP and console)
Alert rules | Sentry issue alerts and metric alerts

NestJS + Sentry Setup

pnpm add @sentry/node @sentry/profiling-node

Initialize Sentry before anything else in main.ts — before NestJS creates the application:

// TypeScript — main.ts: Sentry must be initialized first
import "./instrument"; // import before all other modules
import { NestFactory } from "@nestjs/core";
import { Logger } from "nestjs-pino";
import { AppModule } from "./app.module";

async function bootstrap(): Promise<void> {
    const app = await NestFactory.create(AppModule, { bufferLogs: true });
    app.useLogger(app.get(Logger));
    await app.listen(3000);
}
bootstrap();
// TypeScript — src/instrument.ts (Sentry initialization)
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

Sentry.init({
    dsn: process.env.SENTRY_DSN,

    environment: process.env.NODE_ENV ?? "development",

    // Tag errors with the current git SHA or version
    release: process.env.SENTRY_RELEASE ?? process.env.GIT_SHA,

    integrations: [
        nodeProfilingIntegration(),
        // Automatic instrumentation for HTTP, pg, Redis, etc.
        // Sentry auto-detects installed packages
    ],

    // Sample 10% of transactions for performance monitoring in production
    // Capture all in development
    tracesSampleRate: process.env.NODE_ENV === "production" ? 0.1 : 1.0,

    profilesSampleRate: 1.0, // Profile the sampled transactions

    // Filter events before they are sent — equivalent to ITelemetryProcessor
    beforeSend(event, hint) {
        // Do not send 4xx errors to Sentry — they are expected application behavior
        const exception = hint?.originalException;
        if (exception instanceof Error) {
            const statusCode = (exception as any).status ?? (exception as any).statusCode;
            if (typeof statusCode === "number" && statusCode >= 400 && statusCode < 500) {
                return null; // Drop this event
            }
        }
        return event;
    },
});

Capturing Errors with Context

In your NestJS global exception filter (Article 1.8), add Sentry capture for 5xx errors:

// TypeScript — GlobalExceptionFilter with Sentry + Pino integration
import * as Sentry from "@sentry/node";
import { ExceptionFilter, Catch, ArgumentsHost, HttpException, HttpStatus } from "@nestjs/common";
import { InjectPinoLogger, PinoLogger } from "nestjs-pino";
import { Request, Response } from "express";

@Catch()
export class GlobalExceptionFilter implements ExceptionFilter {
    constructor(
        @InjectPinoLogger(GlobalExceptionFilter.name)
        private readonly logger: PinoLogger,
    ) {}

    catch(exception: unknown, host: ArgumentsHost): void {
        const ctx = host.switchToHttp();
        const request = ctx.getRequest<Request>();
        const response = ctx.getResponse<Response>();

        const status = this.resolveStatus(exception);
        const message = this.resolveMessage(exception);

        if (status >= 500) {
            // Log via Pino — will include the reqId correlation
            this.logger.error(
                { err: exception, path: request.url, method: request.method },
                "Unhandled exception",
            );

            // Send to Sentry with request context
            Sentry.withScope((scope) => {
                scope.setTag("endpoint", `${request.method} ${request.route?.path ?? request.url}`);
                scope.setContext("request", {
                    method: request.method,
                    url: request.url,
                    query: request.query,
                    requestId: request.headers["x-request-id"],
                });

                // If you have auth middleware that attaches user to request:
                const user = (request as any).user;
                if (user) {
                    scope.setUser({ id: user.id, email: user.email });
                }

                Sentry.captureException(exception);
            });
        }

        response.status(status).json({ statusCode: status, message });
    }

    private resolveStatus(exception: unknown): number {
        if (exception instanceof HttpException) return exception.getStatus();
        return HttpStatus.INTERNAL_SERVER_ERROR;
    }

    private resolveMessage(exception: unknown): string {
        if (exception instanceof HttpException) return exception.message;
        return "An unexpected error occurred.";
    }
}

Breadcrumbs are Sentry’s equivalent of Application Insights’ dependency tracking and custom events. Sentry automatically adds breadcrumbs for outbound HTTP requests (if you use node-fetch or axios), console log output, and database queries (if you use a supported ORM). You can also add manual breadcrumbs:

// TypeScript — manual breadcrumbs for business events
Sentry.addBreadcrumb({
    category: "payment",
    message: "Payment gateway called",
    data: { orderId, amount, currency, gateway: "stripe" },
    level: "info",
});

// When an error occurs, Sentry shows the timeline leading up to it
// including all breadcrumbs, so you can see exactly what happened

This is the equivalent of TelemetryClient.TrackDependency() or TelemetryClient.TrackEvent() in the Application Insights SDK.

User Context

Application Insights correlates telemetry with user sessions automatically when using the JavaScript SDK. In Sentry, set user context explicitly after authentication:

// TypeScript — set Sentry user context after auth middleware resolves
// src/common/middleware/sentry-user.middleware.ts
import { Injectable, NestMiddleware } from "@nestjs/common";
import * as Sentry from "@sentry/node";
import { Request, Response, NextFunction } from "express";

@Injectable()
export class SentryUserMiddleware implements NestMiddleware {
    use(req: Request, _res: Response, next: NextFunction): void {
        // Called after auth middleware has attached user to request
        const user = (req as any).user;
        if (user) {
            Sentry.setUser({
                id: user.id,
                email: user.email,
                username: user.username,
            });
        }
        next();
    }
}

Once set, every Sentry event captured during that request will include the user context, making it possible to look up “all errors this specific user encountered” — the same search you would run in Application Insights against user_Id.

Performance Monitoring: Distributed Tracing

Sentry’s performance monitoring creates a distributed trace across all services that participate in a request. If your NestJS API calls an external HTTP service, and both have Sentry initialized, Sentry links the traces using the sentry-trace header.

For custom operations you want to measure, wrap them in a span:

// TypeScript — manual performance span
import * as Sentry from "@sentry/node";

async function processLargeExport(exportId: string): Promise<void> {
    return Sentry.startSpan(
        {
            op: "export.process",
            name: "Process Large Export",
            attributes: { exportId, exportType: "csv" },
        },
        async (span) => {
            const rows = await this.fetchExportRows(exportId);
            span.setAttribute("rowCount", rows.length);

            await this.writeToStorage(rows);
        },
    );
}

This is equivalent to TelemetryClient.StartOperation() in the Application Insights SDK, which creates a custom dependency entry in the application map.


Key Differences

Concept | Serilog / Application Insights | Pino / Sentry
------- | ------------------------------ | -------------
Logger injection | ILogger<T> via DI | @InjectPinoLogger(ClassName.name) or new Logger(ClassName.name)
Structured properties | {PropertyName} in message template | First argument object: { propertyName: value }
Request ID correlation | RequestId enricher via LogContext | pino-http + AsyncLocalStorage — automatic
Global log configuration | LoggerConfiguration in Program.cs | LoggerModule.forRoot() in AppModule
Minimum level overrides | .MinimumLevel.Override("Microsoft", Warning) | Set at transport level — no per-namespace override in Pino
Log sinks | Serilog sinks (Console, File, Seq, App Insights) | Pino transports (console, pino-loki, pino-datadog)
Redaction | Destructure.ByTransforming<T>() | redact option with JSON path patterns
Error tracking | Application Insights exception telemetry | Sentry.captureException()
Performance monitoring | Application Insights APM, dependency tracking | Sentry Performance, distributed tracing
User context on errors | Application Insights user correlation via JS SDK | Sentry.setUser() — explicit
Sampling | Application Insights adaptive sampling | tracesSampleRate in Sentry.init()
Event filtering | ITelemetryProcessor | beforeSend callback
Error grouping | Application Insights exception grouping | Sentry issue fingerprinting
Release tracking | cloud_RoleInstance, deployment annotations | release field in Sentry.init()
Local development output | Console sink, human-readable | pino-pretty transport

Gotchas for .NET Engineers

Gotcha 1: Pino’s Log Format Is { props } message, Not a Message Template

In Serilog, the message template is the primary unit — "Order {OrderId} placed" — and properties are named slots in that template. Pino reverses this: the object comes first and the message is a plain string with no property substitution.

// WRONG — trying to use Serilog-style message templates in Pino
this.logger.info("Order %s placed in %dms", order.id, elapsedMs);
// Pino supports printf-style interpolation but it defeats structured logging
// The values end up embedded in the message string, not as separate fields

// CORRECT — properties in the object, message is a plain label
this.logger.info({ orderId: order.id, elapsedMs }, "Order placed");

The output JSON from the wrong approach has "msg": "Order ord_abc123 placed in 47ms" — a string you cannot query on. The correct approach has "orderId": "ord_abc123" and "elapsedMs": 47 as separate fields you can filter, group, and aggregate.

If you are migrating a codebase from Serilog templates to Pino and you see message strings with embedded values everywhere, that is technical debt — not equivalent behavior.

Gotcha 2: Keep Sentry beforeSend Synchronous — Avoid await Inside It

The beforeSend callback runs in Sentry's event pipeline before an event is sent. Recent @sentry/node versions do accept a returned Promise, but async logic inside it (querying a database to enrich the event, for example) adds latency to every captured event, and on older SDK versions the Promise was not awaited at all — it was treated as a truthy event or the event was discarded. Treat beforeSend as a synchronous filter.

// RISKY — async beforeSend
beforeSend: async (event, hint) => {
    const extra = await this.lookupAdditionalContext(hint.originalException);
    event.extra = { ...event.extra, ...extra };
    return event; // Delays delivery on recent SDKs; silently misbehaves on older ones
},

// CORRECT — synchronous only; do enrichment in Sentry scopes at the capture site
beforeSend: (event, hint) => {
    const exception = hint?.originalException;
    if (exception instanceof HttpException && exception.getStatus() < 500) {
        return null; // Drop 4xx errors
    }
    return event;
},

For async enrichment, use Sentry.withScope() at the capture site and set context there with synchronous data that is already in scope.

Gotcha 3: pino-pretty in Production Destroys Your Log Aggregation

pino-pretty is a development tool that formats Pino’s JSON output as human-readable colored text. If it ends up in your production configuration — even accidentally through a shared config or a forgotten NODE_ENV check — every log line becomes an unstructured string. Your log aggregation tool cannot parse it. You lose all queryable fields.

The guard is straightforward:

// TypeScript — explicit environment check, always
transport: process.env.NODE_ENV !== "production"
    ? { target: "pino-pretty", options: { colorize: true } }
    : undefined,

Set NODE_ENV=production in your Render or Docker environment. Do not rely on it being set by the deploy platform — set it explicitly.

Gotcha 4: Application Insights Tracks More Automatically Than Sentry

Application Insights’ Node.js SDK (and the Azure Monitor OpenTelemetry distro) automatically instruments http, https, pg, mysql, redis, mongodb, and several other modules. Sentry does the same for the modules it explicitly supports, but the list is shorter and the level of automatic detail differs.

Specifically: Application Insights records every outbound HTTP request as a dependency with timing, status code, and URL. Sentry records outbound HTTP requests as breadcrumbs in the event timeline, but does not create standalone transactions for them unless they are triggered within a sampled transaction.

If your team relies on the Application Insights application map (the visual graph of service dependencies), you will need to set up OpenTelemetry separately if that level of detail matters. Sentry’s performance monitoring is excellent for identifying slow endpoints and tracing user-initiated flows, but it is not a drop-in replacement for the Application Insights application map.

Gotcha 5: Log Levels in NestJS Bootstrap vs. Application Logs

NestJS emits its own internal logs (module registration, dependency resolution, route mapping) during startup. With nestjs-pino, these go through Pino. If your LOG_LEVEL is set to warn or higher, you will suppress NestJS’s startup [NestFactory] and [RoutesResolver] info logs entirely. This is usually what you want in production, but it can cause confusion when debugging startup issues.

During production incident response, temporarily lower the log level:

# Render: set LOG_LEVEL environment variable
LOG_LEVEL=debug

NestJS also has a separate logger option in NestFactory.create() that controls which NestJS internal log categories appear. Do not confuse this with Pino’s level setting — they are separate configuration points.
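The two knobs look like this side by side — a configuration sketch, assuming the nestjs-pino setup shown earlier:

```typescript
// NestJS's own category filter: which internal log levels NestJS emits at all.
// Separate from Pino's `level` (LOG_LEVEL), which filters what Pino writes out.
const app = await NestFactory.create(AppModule, {
    logger: ["error", "warn"], // suppress NestJS info-level startup chatter
    bufferLogs: true,
});
app.useLogger(app.get(Logger)); // Pino still formats whatever NestJS lets through
```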


Hands-On Exercise

Instrument a NestJS API with Pino structured logging, request ID correlation, and Sentry error tracking. By the end, every request should produce a correlated log trail and any unhandled error should appear in Sentry with user context and a clean stack trace.

Prerequisites: A NestJS project with at least one controller and service, and a Sentry account with a Node.js project DSN.

Step 1 — Install dependencies

pnpm add pino pino-http nestjs-pino @sentry/node @sentry/profiling-node
pnpm add -D pino-pretty

Step 2 — Create src/instrument.ts

Initialize Sentry with your DSN, set tracesSampleRate: 1.0 for development, and add a beforeSend that drops 4xx errors.

Step 3 — Import instrument.ts as the first line of main.ts

Verify Sentry is initialized before NestJS bootstrap. Confirm the connection by sending a test event — for example, Sentry.captureMessage("sentry-smoke-test") — and checking that it appears in the Sentry dashboard.

Step 4 — Configure Pino in AppModule

Add LoggerModule.forRoot() with a genReqId that honors x-request-id headers. Configure pino-pretty for development. Add redact paths for req.headers.authorization and req.body.password.

Step 5 — Replace the built-in logger in main.ts

Call app.useLogger(app.get(Logger)) after Pino is initialized. Confirm NestJS startup logs now appear in JSON format in production mode or colored in development.

Step 6 — Inject PinoLogger into a service

Replace any console.log or new Logger() usage with @InjectPinoLogger(ServiceName.name). Emit at least one info log with a structured properties object.

Step 7 — Wire Sentry into the global exception filter

In your GlobalExceptionFilter, add Sentry.captureException() for 5xx errors. Use Sentry.withScope() to attach the request path and user context.

Step 8 — Test correlation

Make an API call and grep the logs for the reqId value. Verify that the request log, any service logs, and any error logs all carry the same reqId. Then trigger a 500 error (throw from a service) and verify it appears in Sentry with a readable stack trace.
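The grep itself can be rehearsed on fake log lines first — a simulated version of the check, with illustrative reqId values:

```shell
# Two log lines from the same request share one reqId; a third from another request does not
printf '%s\n' \
  '{"reqId":"abc-123","msg":"Incoming GET /orders/123"}' \
  '{"reqId":"abc-123","msg":"Fetching order"}' \
  '{"reqId":"def-456","msg":"Incoming GET /health"}' > /tmp/app.log

grep -c '"reqId":"abc-123"' /tmp/app.log  # prints 2 — both correlated lines
```

Against a real service, replace the printf with your captured stdout and grep for the x-request-id value returned in the response headers.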

Stretch goal: Add a Sentry performance span around one slow operation in your service and verify it appears in the Sentry Performance dashboard with correct timing.


Quick Reference

Serilog / App Insights Concept | Pino / Sentry Equivalent | Notes
------------------------------ | ------------------------ | -----
ILogger<T> injection | @InjectPinoLogger(ClassName.name) | From nestjs-pino
_logger.LogInformation(template, args) | this.logger.info({ props }, "message") | Properties first, message second
_logger.LogError(ex, template, args) | this.logger.error({ err }, "message") | Pass Error object as err field
LogContext.PushProperty() enrichment | AsyncLocalStorage via pino-http | Automatic for request context
RequestId enricher | genReqId in pinoHttp config | Auto-attached to all logs in the request
MinimumLevel.Override("X", Warning) | No per-namespace override in Pino | Set at transport or filter downstream
WriteTo.Console(new JsonFormatter()) | Pino default output is JSON | Use pino-pretty for dev only
WriteTo.ApplicationInsights() | pino-loki, pino-datadog transports | Or skip and rely on Sentry
Destructure.ByTransforming<T>() | serializers option in pinoHttp | Per-key transform functions
Enrich.WithMachineName() | pid, hostname in Pino output — automatic | Pino includes these by default
TelemetryClient.TrackException() | Sentry.captureException(err) | Direct equivalent
TelemetryClient.TrackEvent() | Sentry.captureMessage() + breadcrumbs | Breadcrumbs for timeline events
TelemetryClient.TrackDependency() | Sentry.startSpan() | Manual spans for custom operations
TelemetryClient.TrackRequest() | Automatic via Sentry HTTP integration | No manual call needed
ITelemetryProcessor | beforeSend in Sentry.init() | Keep it synchronous
User context (AuthenticatedUserTelemetryInitializer) | Sentry.setUser({ id, email }) | Call after auth resolves
Application Insights application map | No direct Sentry equivalent | Use OpenTelemetry + Tempo/Jaeger
release annotation in App Insights | release: process.env.GIT_SHA in Sentry.init() | Links errors to specific deploys
SamplingPercentageTelemetryProcessor | tracesSampleRate: 0.1 | 10% sampling = 0.1

Pino log levels:

logger.trace({ ... }, "msg"); // Level 10 — verbose debugging
logger.debug({ ... }, "msg"); // Level 20 — development debugging
logger.info({ ... }, "msg");  // Level 30 — normal operation
logger.warn({ ... }, "msg");  // Level 40 — recoverable issues
logger.error({ ... }, "msg"); // Level 50 — errors requiring attention
logger.fatal({ ... }, "msg"); // Level 60 — process exit imminent

Further Reading