Docker for Node.js: From .NET Images to Node.js Images
For .NET engineers who know: Docker for .NET (multi-stage builds, mcr.microsoft.com/dotnet images, Docker Compose)
You’ll learn: How Node.js Dockerfiles differ from .NET Dockerfiles, how to optimize image size and layer caching, and how to set up a production-ready local dev environment with Docker Compose
Time: 15-20 minutes
The .NET Way (What You Already Know)
A typical .NET multi-stage Dockerfile:
# Build stage
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /out
# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
COPY --from=build /out .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]
The pattern is:
- Build in the SDK image (large)
- Copy compiled output to a runtime image (small)
- The compiled output is self-contained — no source, no build tools in production
Node.js follows the same multi-stage principle but with key differences: the build step compiles TypeScript, and the production stage needs node_modules (runtime dependencies), not just compiled output.
The Node.js Way
Image Choices
For .NET, the choice is simple: SDK for build, aspnet for runtime. Node.js has more options:
| Base Image | Size (approx.) | Use Case |
|---|---|---|
| node:20 | ~1.1 GB | Full Debian — development, debugging |
| node:20-slim | ~230 MB | Debian with minimal packages — good default |
| node:20-alpine | ~60 MB | Alpine Linux — smallest, but quirks exist |
| cgr.dev/chainguard/node | ~50 MB | Distroless — minimal attack surface, no shell |
Alpine is popular for size, but it uses musl libc instead of glibc. Most npm packages work fine, but native addons (compiled C++ bindings) often need rebuilding or fail entirely. If you use packages like sharp, bcrypt, or anything with native code, test Alpine thoroughly before committing to it.
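If you're unsure which libc a given image's Node binary runs against, Node's diagnostic report can tell you (a minimal sketch):

```javascript
// On glibc-based images (node:20, node:20-slim) the report header
// carries the glibc version; on musl-based Alpine images the field
// is absent.
const header = process.report.getReport().header;
console.log(header.glibcVersionRuntime ?? 'no glibc (likely musl, e.g. Alpine)');
```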
Slim is the pragmatic default: glibc compatibility, small enough, Debian package ecosystem available if needed.
Distroless (like Chainguard or gcr.io/distroless/nodejs20-debian12) has no shell, no package manager, no utilities — only Node.js. This is the gold standard for security: an attacker who gets code execution in the container has no tools to pivot with. The tradeoff is that docker exec debugging doesn’t work.
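One workaround for the no-shell tradeoff is to attach a throwaway debug container that joins the target's namespaces, since you can't docker exec into a distroless image. A sketch, where my-api stands in for your container name:

```shell
# busybox brings its own shell and tools; --pid and --network join the
# target container's process and network namespaces for inspection
docker run --rm -it \
  --pid container:my-api \
  --network container:my-api \
  busybox sh
```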
Multi-Stage Dockerfile for TypeScript
The key insight: Node.js needs node_modules at runtime (for require() to work). You can’t just copy compiled .js files like you copy a .NET DLL. The trick is installing only production dependencies in the final stage:
# ============================================================
# Stage 1: Install ALL dependencies (including devDependencies)
# ============================================================
FROM node:20-slim AS deps
WORKDIR /app
# Copy package files first — this layer is cached unless they change
COPY package.json package-lock.json ./
# ci = clean install, respects lockfile
RUN npm ci
# ============================================================
# Stage 2: Build (TypeScript -> JavaScript)
# ============================================================
FROM node:20-slim AS build
WORKDIR /app
# Copy node_modules from deps stage
COPY --from=deps /app/node_modules ./node_modules
# Copy source
COPY . .
# Compile TypeScript
RUN npm run build
# ============================================================
# Stage 3: Production runtime
# ============================================================
FROM node:20-slim AS production
WORKDIR /app
# Set production environment
ENV NODE_ENV=production
# Create non-root user (see Gotchas)
RUN groupadd --gid 1001 nodejs && \
useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
# Copy package files
COPY package.json package-lock.json ./
# Install ONLY production dependencies
RUN npm ci --omit=dev && npm cache clean --force
# Copy compiled output from build stage
COPY --from=build /app/dist ./dist
# Copy any other runtime assets (migrations, templates, etc.)
# COPY --from=build /app/prisma ./prisma
# Switch to non-root user
USER nodejs
# Expose the port your app listens on
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
# Start the app
CMD ["node", "dist/main.js"]
pnpm Variant
If your project uses pnpm (which we prefer in monorepos):
FROM node:20-slim AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
FROM base AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --frozen-lockfile
FROM base AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN pnpm run build
FROM base AS production
WORKDIR /app
ENV NODE_ENV=production
RUN groupadd --gid 1001 nodejs && \
useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,id=pnpm,target=/pnpm/store pnpm install --prod --frozen-lockfile
COPY --from=build /app/dist ./dist
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["node", "dist/main.js"]
The --mount=type=cache flag (BuildKit feature) caches the pnpm store between builds without copying it into the image — equivalent to Docker layer caching but more granular.
.dockerignore
Like .gitignore but for Docker’s build context. Without it, COPY . . sends everything — including node_modules (often 500MB+) — to the Docker daemon:
# Dependencies (always installed fresh in the image)
node_modules/
**/node_modules/
# Build outputs
dist/
build/
.next/
.turbo/
# Environment files
.env
.env.*
!.env.example
# Editor
.vscode/
.idea/
*.swp
# Git
.git/
.gitignore
# Test files (not needed in production image)
**/*.test.ts
**/*.spec.ts
**/__tests__/
coverage/
# Docker
Dockerfile
docker-compose*.yml
.dockerignore
# CI
.github/
# Docs
*.md
docs/
# OS
.DS_Store
Thumbs.db
Without .dockerignore, a project with a 300MB node_modules sends all of that to the build context on every docker build. The build still works but it’s slow and wastes network IO.
Layer Caching for node_modules
The most important optimization in a Node.js Dockerfile is layer ordering. Docker caches layers — if a layer’s input hasn’t changed, it uses the cached result.
Wrong (busts cache on any file change):
COPY . . # Copies everything — any code change busts this layer
RUN npm ci # Reinstalls everything every time
Right (cache only busts when lockfile changes):
COPY package.json package-lock.json ./ # Only these two files
RUN npm ci # Cached unless lockfile changes
COPY . . # Code changes don't affect npm ci cache
RUN npm run build
This is the same principle as the .NET pattern of copying .csproj first and running dotnet restore before copying source. The package manifest changes rarely; source code changes constantly.
Image Size Comparison
Build a Node.js API and compare. The commands below assume your Dockerfile declares ARG BASE and begins each stage with FROM ${BASE} (the Dockerfile above hardcodes node:20-slim, so add that first):
# node:20 (full Debian)
docker build --target production -t api:full \
--build-arg BASE=node:20 .
docker image inspect api:full --format='{{.Size}}' | numfmt --to=iec
# node:20-slim
docker build --target production -t api:slim \
--build-arg BASE=node:20-slim .
# node:20-alpine
docker build --target production -t api:alpine \
--build-arg BASE=node:20-alpine .
Typical results for a NestJS API:
- node:20 base: ~1.4 GB
- node:20-slim base: ~350 MB
- node:20-alpine base: ~130 MB
The difference matters for:
- Registry storage costs
- Pull time in CI/CD (pulling 1.4 GB vs 130 MB is 10x different)
- Attack surface (fewer packages = fewer vulnerabilities)
Running as Non-Root
By default, Node.js containers run as root. If someone exploits your app and gets shell access, they’re root inside the container — which can mean writing to mounted volumes, reading secrets from env, or attempting container escapes.
The fix is two lines:
RUN groupadd --gid 1001 nodejs && \
useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
# ... copy files, install deps ...
USER nodejs
Ensure any directories your app writes to are owned by this user:
RUN mkdir -p /app/uploads && chown nodejs:nodejs /app/uploads
USER nodejs
For Alpine (which uses addgroup/adduser):
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nodejs
USER nodejs
Docker Compose for Local Development
Docker Compose replaces the Azurite (Azure Storage Emulator) + SQL Server LocalDB + manually started services pattern. Define everything your app needs and start it with one command:
# docker-compose.yml
# NOTE: the top-level `version:` key is obsolete in the Compose Specification
# and ignored by Compose v2; it can be omitted.
services:
api:
build:
context: .
target: deps # Use the deps stage, not production
volumes:
- ./src:/app/src # Mount source for hot-reload
- ./dist:/app/dist
command: npm run start:dev # ts-node or nodemon for hot reload
ports:
- "3000:3000"
- "9229:9229" # Node.js debugger port
environment:
NODE_ENV: development
DATABASE_URL: postgresql://postgres:password@postgres:5432/myapp
REDIS_URL: redis://redis:6379
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
restart: unless-stopped
postgres:
image: postgres:16-alpine
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: myapp
ports:
- "5432:5432" # Expose for local tools (DBeaver, TablePlus)
volumes:
- postgres_data:/var/lib/postgresql/data
- ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql # Seed script
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
ports:
- "6379:6379" # Expose for local inspection (Redis Insight)
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
# Optional: PgAdmin for database management (replaces SSMS)
pgadmin:
image: dpage/pgadmin4:latest
environment:
PGADMIN_DEFAULT_EMAIL: admin@local.dev
PGADMIN_DEFAULT_PASSWORD: admin
ports:
- "5050:80"
depends_on:
- postgres
profiles:
- tools # Only starts with: docker compose --profile tools up
volumes:
postgres_data:
redis_data:
Usage:
# Start all services
docker compose up -d
# Start with tools (pgadmin)
docker compose --profile tools up -d
# View logs
docker compose logs -f api
docker compose logs -f postgres
# Restart just the API (after code changes if not using hot-reload)
docker compose restart api
# Open a shell in the running container
docker compose exec api sh
# Run a one-off command (like migrations)
docker compose exec api npm run db:migrate
# Stop everything
docker compose down
# Stop and remove volumes (reset database)
docker compose down -v
Development vs Production Compose Files
Use override files for environment-specific config:
# docker-compose.override.yml (loaded automatically in dev)
services:
api:
build:
target: deps # Dev stage, not production
volumes:
- ./src:/app/src
command: npm run start:dev
environment:
NODE_ENV: development
LOG_LEVEL: debug
# docker-compose.prod.yml (explicit for production testing)
services:
api:
build:
target: production
environment:
NODE_ENV: production
# Development (uses docker-compose.yml + docker-compose.override.yml automatically)
docker compose up
# Production simulation
docker compose -f docker-compose.yml -f docker-compose.prod.yml up
When Docker vs Render Native Builds
Render can build your Node.js app natively (without Docker) using its buildpack — just set a build command and it handles Node.js setup automatically. When should you use Docker instead?
Use Render’s native build when:
- Standard Node.js app with no special system dependencies
- You want the simplest possible deploy setup
- Build time matters and you want to skip image build + push steps
Use Docker when:
- You need specific system libraries (ImageMagick, ffmpeg, specific glibc version)
- You want identical behavior between local dev and production
- Your app has non-standard startup requirements
- You’re deploying the same image to multiple environments (Render, other cloud, on-prem)
- You need to control exactly what’s in the production environment
For Render with Docker:
# render.yaml
services:
- type: web
name: my-api
runtime: docker # Use Dockerfile instead of buildpack
dockerfilePath: ./Dockerfile
dockerContext: .
healthCheckPath: /health
BuildKit Features
Enable BuildKit for faster builds and advanced features:
# Enable for a single build
DOCKER_BUILDKIT=1 docker build .
# Enable globally (add to ~/.profile or ~/.zshrc)
export DOCKER_BUILDKIT=1
# Or use the newer syntax
docker buildx build .
BuildKit enables:
- --mount=type=cache — persistent build cache (pnpm store, apt cache)
- --mount=type=secret — pass secrets to build without baking them into layers
- Parallel stage execution
- Better output with progress reporting
Passing secrets at build time (e.g., for private npm registry):
# In Dockerfile
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
npm ci
# In build command
docker build --secret id=npmrc,src=.npmrc .
This is safer than ARG NPM_TOKEN which bakes the token into the image layer history.
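You can verify the leak yourself: build args consumed by a RUN instruction are recorded in the layer metadata. A sketch, where my-image stands in for your tag:

```shell
# RUN lines that consumed ARG NPM_TOKEN show the token value in history;
# secrets passed via --mount=type=secret leave no trace here
docker history --no-trunc my-image | grep -i token
```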
Key Differences from .NET Docker
| Concern | .NET | Node.js |
|---|---|---|
| Build artifact | Self-contained binary or DLL set | Compiled JS + node_modules |
| Runtime image needs | Just the .NET runtime | Node.js runtime + production node_modules |
| Image base | mcr.microsoft.com/dotnet/aspnet | node:20-slim or Alpine |
| Non-root user | app user (sometimes pre-configured) | Must create manually |
| Hot reload dev | Volume mount + dotnet watch | Volume mount + nodemon or ts-node-dev |
| Native addons | Rarely an issue | Watch for musl/glibc conflicts on Alpine |
| Port | 8080 default | Any port, typically 3000/4000 |
| Build cache key | .csproj restore hash | package-lock.json or pnpm-lock.yaml |
Gotchas for .NET Engineers
Gotcha 1: node_modules inside the container conflicts with local node_modules.
When you bind-mount the whole project for hot-reload (-v .:/app), the container's node_modules is shadowed by your host machine's node_modules, which may differ (different OS, architecture, native addon compilation). Fix: layer a named volume over node_modules so the container keeps its own copy:
services:
  api:
    volumes:
      - .:/app
      - node_modules:/app/node_modules # Named volume takes precedence
volumes:
  node_modules:
Gotcha 2: Alpine native addon failures are silent until runtime.
npm install on Alpine may succeed even when a native addon falls back to a pure-JS polyfill; the failure then surfaces at runtime, not at build time. bcrypt vs bcryptjs is the classic example: bcrypt is faster (native code, can break on musl), while bcryptjs is pure JS and always works on Alpine. Test your exact dependencies on Alpine before committing to it.
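A cheap guard is to import your native-addon dependencies inside the built image as a CI smoke test, so failures surface before production. A sketch, where api:alpine and bcrypt stand in for your image tag and native dependencies:

```shell
# Fails the CI step if the native addon cannot load on musl
docker run --rm api:alpine \
  node -e "require('bcrypt'); console.log('native addons ok')"
```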
Gotcha 3: Layer cache busts propagate forward.
If you change a layer, every subsequent layer is also rebuilt. This means the order of COPY and RUN statements matters enormously. Always copy only what each layer needs — COPY package.json package-lock.json ./ before RUN npm ci, not COPY . .. A common mistake is copying a README or tsconfig.json early and accidentally busting the npm install cache.
Gotcha 4: npm start vs node dist/main.js in production.
npm start spawns npm as a parent process which then spawns node. This means signals (like SIGTERM from Docker during graceful shutdown) go to npm, which may not properly forward them to Node.js. Always use CMD ["node", "dist/main.js"] directly in production, not CMD ["npm", "start"]. If you must use npm scripts, use exec in the script: "start": "exec node dist/main.js".
Gotcha 5: EXPOSE does not actually expose ports.
EXPOSE in a Dockerfile is documentation only — it does nothing at runtime. Ports are only actually published when you run docker run -p 3000:3000 or define ports: in Docker Compose. Do not confuse EXPOSE with port publishing. Include it anyway — tools like Docker Desktop and orchestrators use it for discovery.
Gotcha 6: Health check scripts must be in the image.
If your health check uses curl (HEALTHCHECK CMD curl -f http://localhost:3000/health), but your image is node:20-alpine which doesn’t include curl, the health check fails at startup with a confusing error. Use a Node.js inline script instead (shown in the Dockerfile above) — it’s always available if Node.js is.
Hands-On Exercise
Build and run a minimal Express API in Docker:
mkdir docker-practice && cd docker-practice
# Initialize a minimal Node.js + TypeScript project
npm init -y
npm install express
npm install -D typescript @types/express @types/node ts-node
cat > tsconfig.json << 'EOF'
{
"compilerOptions": {
"target": "ES2022",
"module": "commonjs",
"outDir": "dist",
"rootDir": "src",
"strict": true
}
}
EOF
mkdir src
cat > src/index.ts << 'EOF'
import express from 'express';
const app = express();
const port = parseInt(process.env.PORT ?? '3000', 10);
app.get('/health', (req, res) => {
res.json({ status: 'ok', pid: process.pid, node: process.version });
});
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
EOF
# Add build script to package.json
npm pkg set scripts.build="tsc"
npm pkg set scripts.start="node dist/index.js"
Now create the Dockerfile:
cat > Dockerfile << 'EOF'
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
FROM node:20-slim AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:20-slim AS production
WORKDIR /app
ENV NODE_ENV=production
RUN groupadd --gid 1001 nodejs && \
useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=build /app/dist ./dist
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=15s --timeout=5s --start-period=10s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["node", "dist/index.js"]
EOF
cat > .dockerignore << 'EOF'
node_modules/
dist/
.env
.git/
*.md
EOF
# Build the image
docker build --target production -t docker-practice:latest .
# Check the image size
docker image inspect docker-practice:latest --format='{{.Size}}' | numfmt --to=iec
# Run it
docker run -p 3000:3000 --name docker-practice -d docker-practice:latest
# Test the health endpoint
curl http://localhost:3000/health
# Check health status
docker inspect --format='{{.State.Health.Status}}' docker-practice
# Clean up
docker stop docker-practice && docker rm docker-practice
Try changing the base image to node:20-alpine and compare sizes.
Quick Reference
# Multi-stage Node.js Dockerfile skeleton
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
FROM node:20-slim AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:20-slim AS production
WORKDIR /app
ENV NODE_ENV=production
RUN groupadd --gid 1001 nodejs && \
useradd --uid 1001 --gid nodejs --shell /bin/bash --create-home nodejs
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=build /app/dist ./dist
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
CMD ["node", "dist/main.js"]
# Build commands
docker build . # Build default target
docker build --target production . # Build specific stage
docker build --no-cache . # Ignore all cached layers
DOCKER_BUILDKIT=1 docker build . # Enable BuildKit
# Run commands
docker run -p 3000:3000 -d my-image # Run detached
docker run --rm -it my-image sh # Interactive shell
docker exec -it container-name sh # Shell in running container
# Docker Compose
docker compose up -d # Start all services
docker compose up -d --build # Rebuild images then start
docker compose logs -f api # Follow logs
docker compose exec api sh # Shell in service
docker compose down # Stop and remove containers
docker compose down -v # Also remove volumes
# Inspect
docker image ls # List images
docker image inspect my-image # Image details
docker container ls # Running containers
docker stats # Live resource usage