6.10 — Performance Monitoring & APM
For .NET engineers who know: Application Insights SDK, Live Metrics Stream, distributed tracing with W3C TraceContext, and Azure Monitor alerts.
You’ll learn: How Sentry covers both error tracking and performance monitoring for our stack, and where it maps to (and diverges from) Application Insights.
Time: 10–15 min read
The .NET Way (What You Already Know)
Application Insights is Microsoft’s APM solution. The Microsoft.ApplicationInsights.AspNetCore NuGet package instruments an ASP.NET Core application automatically — requests, dependencies (SQL, HTTP, Redis), exceptions, and custom events flow to Azure Monitor with minimal configuration:
// Program.cs — Application Insights in ASP.NET Core
builder.Services.AddApplicationInsightsTelemetry(options =>
{
options.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"];
options.EnableAdaptiveSampling = true; // sample at high throughput
options.EnableQuickPulseMetricStream = true; // Live Metrics Stream
});
Application Insights gives you: a transaction search (find a specific request by ID), an application map showing service dependencies, distributed traces spanning multiple services via W3C TraceContext headers, performance metrics per endpoint (P50/P75/P95 response times), and SQL query durations attached to the traces that generated them. Alerting lives in Azure Monitor.
The core data model: a request is the root trace. Attached to it are dependencies (outbound SQL, HTTP, Redis calls) and exceptions. Custom telemetry (TelemetryClient.TrackEvent, TrackMetric) enriches the trace. All of it rolls up to a single trace ID you can use to reconstruct the full request path across services.
Our stack replaces Application Insights with Sentry. The data model is similar; the implementation differs.
Sentry vs. Application Insights — Conceptual Map
| Application Insights | Sentry Equivalent |
|---|---|
| Requests (server-side) | Transactions / Spans |
| Dependencies (SQL, HTTP, Redis) | Child spans within a transaction |
| Exceptions / TelemetryClient.TrackException | Issues (errors + stack traces) |
| Custom events (TrackEvent) | Custom spans or breadcrumbs |
| Custom metrics (TrackMetric) | Measurements attached to spans |
| Live Metrics Stream | Sentry’s real-time event stream |
| Application Map | Sentry’s Trace Explorer (partial equivalent) |
| Sampling rate | tracesSampleRate |
| Smart detection / anomaly alerts | Sentry Alerts (metric-based) |
| Azure Monitor dashboards | Sentry Performance dashboard |
| Distributed tracing (W3C) | Sentry distributed tracing (W3C-compatible) |
| Connection string | DSN (Data Source Name) |
Where Application Insights is deeply integrated with Azure (RBAC, Log Analytics, KQL queries), Sentry is cloud-agnostic and works the same on Render, Vercel, or self-hosted infrastructure.
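The mapping above can be made concrete with an illustrative type sketch. These are hypothetical shapes, not the actual Application Insights or Sentry SDK interfaces; the point is that both systems model the same thing: a tree of timed spans under one trace ID.

```typescript
// Hypothetical types sketching the shared APM data model.
interface Span {
  op: string;            // e.g. 'db.query', 'http.client' (an AI "dependency type")
  description: string;   // e.g. 'SELECT * FROM orders'
  startMs: number;
  endMs: number;
  children: Span[];
}

interface Transaction extends Span {
  traceId: string;       // shared across services via W3C TraceContext in both systems
  name: string;          // route pattern, e.g. 'GET /api/orders/:id'
}

// Total time attributed to a given operation type within one trace tree
function timeSpentIn(root: Span, op: string): number {
  const own = root.op === op ? root.endMs - root.startMs : 0;
  return own + root.children.reduce((sum, c) => sum + timeSpentIn(c, op), 0);
}
```

With this lens, an Application Insights "request with dependencies" and a Sentry "transaction with child spans" are the same structure under different names.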
Setting Up Sentry Performance Tracing
NestJS (Backend)
pnpm add @sentry/node @sentry/profiling-node
Initialize Sentry at the very top of main.ts, before any other imports:
// main.ts — Sentry must be initialized before other imports
import * as Sentry from '@sentry/node';
import { nodeProfilingIntegration } from '@sentry/profiling-node';
Sentry.init({
dsn: process.env.SENTRY_DSN,
environment: process.env.NODE_ENV ?? 'development',
release: process.env.SENTRY_RELEASE ?? 'local',
// Performance tracing
tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
// 10% of requests traced in production; 100% in dev/staging
// Profiling (CPU profiling attached to traces)
profilesSampleRate: 0.1, // profile 10% of sampled transactions
integrations: [
nodeProfilingIntegration(),
// Automatically instruments HTTP, PostgreSQL, Redis, etc.
Sentry.prismaIntegration(),
],
// Filter out health check endpoints from tracing
tracesSampler: (samplingContext) => {
const url = samplingContext.request?.url ?? '';
if (url.includes('/health') || url.includes('/metrics')) {
return 0; // never trace health checks
}
return process.env.NODE_ENV === 'production' ? 0.1 : 1.0;
},
});
// Now import and initialize NestJS.
// Caveat: in an ESM build these imports are hoisted above Sentry.init();
// see the instrument.ts pattern in the Gotchas section, which guarantees ordering.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
// Sentry request/tracing handlers (SDK v7 API); must run before routes.
// In @sentry/node v8+, HTTP requests are instrumented automatically and
// errors are captured via Sentry.setupExpressErrorHandler(app) instead.
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());
await app.listen(process.env.PORT ?? 3000);
}
bootstrap();
Wire the Sentry exception handler in the NestJS exception filter to capture unhandled exceptions with full context:
// sentry-exception.filter.ts
import {
ArgumentsHost,
Catch,
ExceptionFilter,
HttpException,
HttpStatus,
} from '@nestjs/common';
import * as Sentry from '@sentry/node';
import { Request, Response } from 'express';
@Catch()
export class SentryExceptionFilter implements ExceptionFilter {
catch(exception: unknown, host: ArgumentsHost) {
const ctx = host.switchToHttp();
const req = ctx.getRequest<Request>();
const res = ctx.getResponse<Response>();
const status =
exception instanceof HttpException
? exception.getStatus()
: HttpStatus.INTERNAL_SERVER_ERROR;
// Only send 5xx errors to Sentry — 4xx are expected client errors
if (status >= 500) {
Sentry.withScope((scope) => {
scope.setTag('endpoint', `${req.method} ${req.route?.path ?? req.path}`);
scope.setUser({ id: (req as any).user?.id });
scope.setContext('request', {
method: req.method,
url: req.url,
params: req.params,
query: req.query,
// never log request body — may contain credentials
});
Sentry.captureException(exception);
});
}
res.status(status).json({
statusCode: status,
message: exception instanceof HttpException
? exception.message
: 'Internal server error',
});
}
}
Register the filter globally:
// main.ts (continued)
import { SentryExceptionFilter } from './filters/sentry-exception.filter';
app.useGlobalFilters(new SentryExceptionFilter());
Custom Performance Spans
To instrument a specific business operation (equivalent to TelemetryClient.StartOperation in Application Insights):
// product.service.ts — custom span for a slow operation
import * as Sentry from '@sentry/node';
@Injectable()
export class ProductService {
async importProducts(filePath: string): Promise<number> {
return Sentry.startSpan(
{
name: 'product.import',
op: 'task',
attributes: {
'file.path': filePath,
'import.type': 'csv',
},
},
async (span) => {
const rows = await this.parseCsv(filePath);
span.setAttribute('import.row_count', rows.length);
const inserted = await this.bulkInsert(rows);
span.setAttribute('import.inserted_count', inserted);
return inserted;
}
);
}
}
The span appears nested under the parent HTTP request trace in Sentry’s Trace Explorer.
Prisma Query Performance
The Sentry.prismaIntegration() initialization (included above) automatically captures all Prisma queries as spans attached to the active transaction; no additional configuration is needed. Each span shows the query duration and the Prisma model and operation (not the full SQL text by default, to avoid logging sensitive data), which makes slow queries easy to spot.
To see which queries are slow: Sentry Performance dashboard > select any transaction > expand the span waterfall > look for Prisma spans with duration > 100ms.
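That triage rule can be expressed in code. The helper below is hypothetical (the span shape is illustrative, not Sentry's API response format), but it captures the logic: filter database spans over a threshold and rank them worst-first.

```typescript
// Hypothetical span summary shape; illustrative, not Sentry's actual API.
interface SpanSummary {
  op: string;          // e.g. 'db.sql.prisma'
  description: string; // e.g. 'prisma:query SELECT ...'
  durationMs: number;
}

// Flag database spans exceeding a threshold (100ms mirrors the rule above)
function slowDbSpans(spans: SpanSummary[], thresholdMs = 100): SpanSummary[] {
  return spans
    .filter((s) => s.op.startsWith('db') && s.durationMs > thresholdMs)
    .sort((a, b) => b.durationMs - a.durationMs); // worst first
}
```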
Next.js (Frontend)
pnpm add @sentry/nextjs
Run the wizard — it generates the configuration files:
npx @sentry/wizard@latest -i nextjs
The wizard creates sentry.client.config.ts, sentry.server.config.ts, and sentry.edge.config.ts. Edit the client config to add performance tracing:
// sentry.client.config.ts
import * as Sentry from '@sentry/nextjs';
Sentry.init({
dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
environment: process.env.NODE_ENV ?? 'development',
release: process.env.SENTRY_RELEASE,
// Performance tracing
tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
// Core Web Vitals
// Sentry automatically captures CWV with the BrowserTracing integration
integrations: [
Sentry.browserTracingIntegration({
// Trace outbound requests to your API
tracePropagationTargets: [
'localhost',
/^https:\/\/api\.example\.com/,
],
}),
Sentry.replayIntegration({
// Session replay — 10% of sessions, 100% of sessions with errors
maskAllText: true, // GDPR: mask text by default
blockAllMedia: true, // block images in replay
}),
],
replaysSessionSampleRate: 0.1,
replaysOnErrorSampleRate: 1.0,
});
The tracePropagationTargets list determines which outbound requests get Sentry trace headers attached — enabling distributed tracing between the Next.js frontend and NestJS backend. This is the equivalent of Application Insights’ automatic correlation via Request-Id and traceparent headers.
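To make the propagation concrete: the sentry-trace header carries the trace ID, the parent span ID, and an optional sampling flag as a single dash-separated value. A minimal parser sketch (the header format is documented by Sentry; the helper itself is illustrative):

```typescript
interface SentryTrace {
  traceId: string;   // 32 hex chars, shared by frontend and backend transactions
  spanId: string;    // 16 hex chars, the parent span on the calling side
  sampled?: boolean; // '1' = trace this request, '0' = don't; absent = defer decision
}

// Parse a sentry-trace header: "<traceId>-<spanId>[-<sampledFlag>]"
function parseSentryTrace(header: string): SentryTrace | null {
  const m = header.trim().match(/^([0-9a-f]{32})-([0-9a-f]{16})(?:-([01]))?$/);
  if (!m) return null;
  return {
    traceId: m[1],
    spanId: m[2],
    sampled: m[3] === undefined ? undefined : m[3] === '1',
  };
}
```

The backend SDK does this parsing for you; seeing the format helps when you inspect request headers in DevTools to verify propagation is actually happening.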
Interpreting the Sentry Performance Dashboard
After a few hours of production traffic, navigate to your Sentry project > Performance.
Transactions view:
Sentry groups requests by route pattern (e.g., GET /api/products/:id, POST /api/orders). For each transaction you see:
- P50 / P75 / P95 / P99 response times (equivalent to Application Insights’ percentile charts)
- Throughput (requests per minute)
- Failure rate (5xx responses as percentage)
Sort by P95 descending to find your slowest endpoints. A P95 of 2000ms means 1 in 20 requests takes over 2 seconds — that is where to start.
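The arithmetic behind that P95 reading is worth internalizing. A sketch using the nearest-rank method (Sentry's exact aggregation may differ, but the intuition holds):

```typescript
// Nearest-rank percentile: the smallest value such that p% of samples are <= it.
function percentile(durationsMs: number[], p: number): number {
  if (durationsMs.length === 0) throw new Error('no samples');
  const sorted = [...durationsMs].sort((a, b) => a - b);
  // Multiply before dividing to keep the rank exact for integer inputs
  const rank = Math.ceil((p * sorted.length) / 100); // 1-based rank
  return sorted[Math.max(0, rank - 1)];
}

// 20 requests: 19 fast, 1 slow. P95 lands on the 19th value.
const samples = [...Array(19).fill(100), 2400];
// percentile(samples, 95) === 100; percentile(samples, 99) === 2400
```

This also shows why P95 hides a "1 in 20" tail: one 2400ms outlier among 20 requests leaves P95 untouched but drags P99 to the outlier.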
Trace Explorer (transaction detail): Click any transaction to see the span waterfall. This is the equivalent of Application Insights’ end-to-end transaction detail:
GET /api/orders/:id 342ms
├── middleware: auth 12ms
├── OrdersService.findOne 285ms
│ ├── prisma:query SELECT orders WHERE id=... 240ms ← slow
│ └── prisma:query SELECT items WHERE order... 30ms
└── serialize response 8ms
The 240ms Prisma query is the bottleneck. Click it to see the model name, operation, and (if you enabled sendDefaultPii) the full query.
Issues vs. Performance: Sentry has two main sections that you will use daily:
- Issues: Error tracking — grouped by stack trace fingerprint, equivalent to Application Insights’ Failures blade
- Performance: APM — transactions, spans, Core Web Vitals, equivalent to Application Insights’ Performance blade
Core Web Vitals Monitoring
Sentry’s browserTracingIntegration automatically captures Core Web Vitals from real user sessions:
| Metric | What It Measures | Good Threshold |
|---|---|---|
| LCP (Largest Contentful Paint) | Load performance — when the largest visible element renders | < 2.5s |
| INP (Interaction to Next Paint) | Responsiveness — time from user interaction to visual update | < 200ms |
| CLS (Cumulative Layout Shift) | Visual stability — how much the layout shifts during load | < 0.1 |
| FCP (First Contentful Paint) | Time to first visible content | < 1.8s |
| TTFB (Time to First Byte) | Server response time | < 800ms |
In Sentry: Performance > Web Vitals tab. This shows the distribution of each metric across real user sessions segmented by page route. Poor LCP on /products but not /dashboard tells you the product page has a specific performance problem (large images, slow data fetch, render-blocking scripts).
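The thresholds in the table map directly onto the good / needs-improvement / poor ratings shown per session. A sketch of that classification using the published web.dev boundaries (the helper itself is illustrative, not a Sentry API):

```typescript
type Rating = 'good' | 'needs-improvement' | 'poor';

// [good upper bound, poor lower bound] per metric, per web.dev thresholds.
// LCP/INP/FCP/TTFB are in milliseconds; CLS is unitless.
const THRESHOLDS: Record<string, [number, number]> = {
  LCP: [2500, 4000],
  INP: [200, 500],
  CLS: [0.1, 0.25],
  FCP: [1800, 3000],
  TTFB: [800, 1800],
};

function rateVital(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```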
Performance Budgets and Alerting
Setting Up Sentry Alerts
In Sentry: Alerts > Create Alert > Performance.
Useful alert configurations for a production API:
Alert: P95 response time > 2000ms
Transaction: GET /api/*
Environment: production
Trigger: when P95 > 2,000ms for 5 minutes
Action: notify Slack #alerts channel
Alert: Error rate > 5%
Environment: production
Trigger: when error rate > 5% for 2 minutes
Action: notify Slack #alerts channel + page on-call
Alert: Apdex score < 0.8
(Apdex measures user satisfaction; 1.0 = all requests within threshold)
Trigger: when Apdex < 0.8 for 10 minutes
Action: notify Slack #alerts
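Apdex itself is a simple ratio: requests at or under the threshold T count as satisfied, those under 4T count half as tolerating, and the rest count zero. A sketch of the standard formula:

```typescript
// Apdex = (satisfied + tolerating / 2) / total
// satisfied: duration <= T; tolerating: T < duration <= 4T; frustrated: > 4T
function apdex(durationsMs: number[], thresholdMs: number): number {
  const satisfied = durationsMs.filter((d) => d <= thresholdMs).length;
  const tolerating = durationsMs.filter(
    (d) => d > thresholdMs && d <= 4 * thresholdMs
  ).length;
  return (satisfied + tolerating / 2) / durationsMs.length;
}

// With T = 500ms: 8 fast, 2 tolerable, 0 frustrated out of 10 -> (8 + 1) / 10 = 0.9
// apdex([...Array(8).fill(100), 900, 900], 500) === 0.9
```

So an Apdex alert at 0.8 fires when roughly one request in five falls outside the satisfied band, which is why it pairs well with the P95 alert above.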
Performance Budgets
Define acceptable thresholds and fail CI when they are violated. Sentry alerts catch regressions in production; to catch them before deploy, the simplest approach for frontend budgets is Lighthouse CI:
# Install Lighthouse CI
pnpm add -D @lhci/cli
// lighthouserc.json
{
"ci": {
"collect": {
"url": ["http://localhost:3000", "http://localhost:3000/products"],
"numberOfRuns": 3
},
"assert": {
"assertions": {
"categories:performance": ["warn", { "minScore": 0.8 }],
"first-contentful-paint": ["warn", { "maxNumericValue": 2000 }],
"largest-contentful-paint": ["error", { "maxNumericValue": 4000 }],
"cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
"total-blocking-time": ["warn", { "maxNumericValue": 600 }]
}
}
}
}
# .github/workflows/ci.yml (excerpt)
- name: Build Next.js
run: pnpm build
- name: Start Next.js
run: pnpm start &
env:
PORT: 3000
- name: Run Lighthouse CI
run: pnpm lhci autorun
Identifying Slow Endpoints and DB Queries
A practical workflow for finding and fixing a performance problem:
Step 1: Open Sentry Performance, sort transactions by P95 descending. Find the endpoint with the worst P95. Check whether it is consistently slow (all-day problem) or spiky (load-related or a specific query plan regression).
Step 2: Click through to the transaction detail. Examine the span waterfall. Look for:
- Prisma spans with duration > 100ms
- N+1 patterns: repeated identical queries (many short spans of the same Prisma query)
- Missing spans: time unaccounted for between spans (usually CPU-bound work)
Step 3: Identify the query from the Prisma span.
The span name shows the model and operation (prisma:query products findMany). In your application code, find the prisma.product.findMany() call. Check whether it is missing an index, returning more columns than needed, or triggering N+1 fetches.
N+1 example — the most common Prisma performance problem:
// BAD — N+1: one query for orders, N queries for each order's user
const orders = await prisma.order.findMany();
const ordersWithUser = await Promise.all(
orders.map(async (order) => ({
...order,
user: await prisma.user.findUnique({ where: { id: order.userId } }),
}))
);
// GOOD — single query with join
const orders = await prisma.order.findMany({
include: { user: true }, // Prisma generates a JOIN
});
Step 4: Add a database index if missing. A missing index on a foreign key or filter column is the single most common cause of slow queries:
// schema.prisma — add index for common filter patterns
model Order {
id String @id @default(cuid())
userId String
status String
createdAt DateTime @default(now())
user User @relation(fields: [userId], references: [id])
@@index([userId]) // filter orders by user
@@index([status, createdAt]) // filter by status, sort by date
}
After adding the index and running prisma migrate dev, deploy and monitor Sentry. The Prisma span duration for that endpoint should drop.
Key Differences from Application Insights
| Application Insights | Sentry |
|---|---|
| Tightly coupled to Azure (RBAC, Log Analytics, Monitor) | Cloud-agnostic |
| KQL for querying telemetry | UI-driven search + Discover query builder |
| Smart detection of anomalies (built-in ML) | Alert rules based on thresholds |
| Application Map (visual service graph) | Trace Explorer (per-transaction waterfall) |
| Live Metrics Stream (real-time telemetry) | Real-time event stream in Issues view |
| Source maps uploaded via CLI or CI plugin | Source maps uploaded via Sentry webpack/turbo plugin |
| Session analytics from App Insights JS SDK | Session Replay (video-like replay of user sessions) |
| Adaptive sampling (automatic rate adjustment) | Configurable tracesSampleRate + tracesSampler function |
| Performance blades in Azure Portal | Unified Performance tab in Sentry |
The practical difference for daily use: Application Insights requires familiarity with KQL to answer non-obvious questions. Sentry’s UI surfaces the most important information (slowest endpoints, worst error rates, failing releases) without writing queries. For ad-hoc investigation you will occasionally miss KQL, but for day-to-day monitoring Sentry’s defaults are faster.
Gotchas for .NET Engineers
1. Sentry.init() must be the very first code that runs — imports included.
In Node.js ES modules, import statements are hoisted and executed before any runtime code in the file, so placing Sentry.init() above your imports in main.ts is not enough on its own. If NestJS or Prisma is loaded before Sentry.init() runs, Sentry cannot instrument those modules: @sentry/node works by module patching, wrapping the exported functions of pg, ioredis, and other packages to inject span creation. Patching a module that is already loaded has no effect, so you see no database spans. The reliable fix is a separate instrument.ts entry point that the runtime loads before the application:
// instrument.ts — initialize Sentry, nothing else
import * as Sentry from '@sentry/node';
Sentry.init({
dsn: process.env.SENTRY_DSN,
tracesSampleRate: 0.1,
});
// package.json — load instrument.ts before the app
{
"scripts": {
"start": "node --require ./dist/instrument.js dist/main.js"
}
}
2. tracesSampleRate: 1.0 in production will saturate your Sentry quota and degrade performance.
Application Insights’ adaptive sampling adjusts automatically. Sentry’s tracesSampleRate is a fixed fraction. At 1.0, every request is traced. For a service handling 1000 req/min, that is 60,000 traces per hour. Most Sentry plans have monthly event limits — you can exhaust the quota in hours and lose all tracing for the rest of the billing period. Start with 0.1 (10%) in production and adjust based on your volume and Sentry plan limits.
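The quota arithmetic is worth doing before you pick a rate. A hypothetical back-of-the-envelope helper (plan limits vary, so check your actual Sentry plan):

```typescript
// Estimate monthly transaction volume sent to Sentry for a given sample rate.
function monthlyTransactions(requestsPerMinute: number, sampleRate: number): number {
  const minutesPerMonth = 60 * 24 * 30;
  return Math.round(requestsPerMinute * sampleRate * minutesPerMonth);
}

// 1000 req/min fully traced: 43.2M transactions/month, far beyond most plans.
// monthlyTransactions(1000, 1.0) === 43_200_000
// At 10% sampling the same service sends 4.32M:
// monthlyTransactions(1000, 0.1) === 4_320_000
```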
3. Distributed tracing between Next.js and NestJS requires CORS to allow Sentry trace headers.
Sentry injects sentry-trace and baggage headers into outbound fetch calls from the browser. Your NestJS CORS configuration must allowlist these headers:
app.enableCors({
allowedHeaders: ['Content-Type', 'Authorization', 'sentry-trace', 'baggage'],
// ... other options
});
Without this, the browser's CORS preflight rejects the request because sentry-trace is not in the allowlist, and the call is blocked. Until the allowlist is fixed, frontend transactions in Sentry will not link to backend transactions — the distributed trace chain is broken.
4. Source maps must be uploaded to Sentry or stack traces in production will be minified and unreadable.
Application Insights stack traces are readable because .NET PDBs are deployed with the application. JavaScript production builds are minified — product.service.ts:42 becomes t.js:1:18432. Without uploaded source maps, Sentry error stack traces are useless for debugging. Upload source maps as part of your CI/CD pipeline:
# Install Sentry CLI
pnpm add -D @sentry/cli
# Upload source maps after build
sentry-cli releases files "$SENTRY_RELEASE" upload-sourcemaps ./dist \
--url-prefix '~/' \
--rewrite
Or use the @sentry/webpack-plugin / Next.js plugin which handles this automatically.
5. Sentry’s Session Replay can capture sensitive user data if masking is relaxed.
The Session Replay feature records real user interactions. The SDK's defaults are privacy-safe: maskAllText: true masks all text content and form inputs, and blockAllMedia: true blocks media. Keep them enabled in replayIntegration to comply with GDPR and prevent credential leakage in replays. Add data-sentry-mask attributes to specific elements that contain PII, or data-sentry-unmask to safe elements you want to keep readable.
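In markup, the masking attributes look like this (a hypothetical fragment; with maskAllText: true everything is masked already, so data-sentry-unmask opts safe elements back in):

```html
<!-- PII: masked explicitly, so it stays hidden even if global masking is relaxed -->
<div data-sentry-mask>
  <span>jane.doe@example.com</span>
</div>

<!-- Safe, non-sensitive UI text: opt out of maskAllText so replays stay readable -->
<h1 data-sentry-unmask>Checkout</h1>
```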
Hands-On Exercise
Instrument a NestJS + Next.js application with Sentry end-to-end performance tracing.
Step 1: Create a Sentry project (or use an existing one) and obtain the DSN. Add SENTRY_DSN to your .env.local.
Step 2: Install @sentry/node in the NestJS app. Add the Sentry.init() call as the very first code in main.ts. Set tracesSampleRate: 1.0 for local development.
Step 3: Add Sentry.prismaIntegration() to the integrations array. Make a few API calls through the NestJS API (use Thunder Client or httpie).
Step 4: Open Sentry > Performance > Transactions. Confirm your NestJS transactions appear. Click one transaction and examine the span waterfall. Confirm Prisma spans are visible with query durations.
Step 5: Add a SentryExceptionFilter to NestJS. Trigger an intentional error (add a route that throws new Error('test error')). Confirm the error appears in Sentry Issues with a readable stack trace.
Step 6: Install @sentry/nextjs in the Next.js app and run the wizard. Add sentry-trace and baggage to the NestJS CORS allowedHeaders. Make a fetch call from the Next.js frontend to the NestJS backend. In Sentry, find the frontend transaction and verify it links to the corresponding backend transaction (the distributed trace).
Step 7: Set up one Sentry alert: P95 > 1000ms for any transaction in the development environment. Trigger it by adding await new Promise(r => setTimeout(r, 1100)) to a route and calling it. Verify the alert fires within the configured window.
Quick Reference
# Sentry CLI
sentry-cli login
sentry-cli releases new "$VERSION"
sentry-cli releases files "$VERSION" upload-sourcemaps ./dist
sentry-cli releases finalize "$VERSION"
sentry-cli releases deploys "$VERSION" new -e production
# Next.js setup wizard (sends a test event at the end to verify delivery)
npx @sentry/wizard@latest -i nextjs
// Minimal NestJS Sentry setup (main.ts)
import * as Sentry from '@sentry/node';
Sentry.init({
dsn: process.env.SENTRY_DSN,
environment: process.env.NODE_ENV,
tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
integrations: [Sentry.prismaIntegration()],
});
// Custom span
const result = await Sentry.startSpan(
{ name: 'my-operation', op: 'task' },
async () => {
return await doWork();
}
);
// Capture error manually
Sentry.captureException(error);
// Add context to current scope
Sentry.setUser({ id: userId, email: userEmail });
Sentry.setTag('feature_flag', 'experiment-a');
Sentry.addBreadcrumb({
message: 'User clicked checkout',
category: 'ui',
level: 'info',
});
// CORS allowedHeaders must include Sentry trace headers
app.enableCors({
allowedHeaders: [
'Content-Type',
'Authorization',
'sentry-trace', // required for distributed tracing
'baggage', // required for distributed tracing
],
credentials: true,
});
Further Reading
- Sentry NestJS SDK documentation
- Sentry Next.js SDK documentation
- Sentry Performance Monitoring documentation
- Sentry Session Replay documentation
- Sentry distributed tracing
- Core Web Vitals — web.dev
- Lighthouse CI documentation
- Application Insights to Sentry migration guide — Sentry’s .NET SDK documentation, useful for mental model comparison