4.8 — File Uploads & Storage: Azure Blob to S3/Cloudflare R2

For .NET engineers who know: IFormFile, Azure.Storage.Blobs, streaming uploads to Azure Blob Storage

You’ll learn: How to handle file uploads in NestJS with Multer, why pre-signed URLs are the preferred architecture, and how to use Cloudflare R2 as an S3-compatible alternative

Time: 10-15 minutes


The .NET Way (What You Already Know)

In ASP.NET Core, file uploads arrive through IFormFile. You bind it in a controller action, validate the content type and size, and push the stream to Azure Blob Storage using the Azure.Storage.Blobs SDK:

// C# — ASP.NET Core file upload to Azure Blob Storage
[ApiController]
[Route("api/uploads")]
public class UploadsController : ControllerBase
{
    private readonly BlobServiceClient _blobServiceClient;

    public UploadsController(BlobServiceClient blobServiceClient)
    {
        _blobServiceClient = blobServiceClient;
    }

    [HttpPost]
    [RequestSizeLimit(10_485_760)] // 10 MB
    public async Task<IActionResult> Upload(IFormFile file, CancellationToken ct)
    {
        if (file.Length == 0)
            return BadRequest("Empty file");

        var allowed = new[] { "image/jpeg", "image/png", "image/webp" };
        if (!allowed.Contains(file.ContentType))
            return BadRequest("Unsupported file type");

        var container = _blobServiceClient.GetBlobContainerClient("uploads");
        var blobName = $"{Guid.NewGuid()}{Path.GetExtension(file.FileName)}";
        var blob = container.GetBlobClient(blobName);

        await blob.UploadAsync(file.OpenReadStream(), new BlobHttpHeaders
        {
            ContentType = file.ContentType
        }, cancellationToken: ct);

        return Ok(new { url = blob.Uri });
    }
}

This pattern — browser sends file to your API, API streams it to blob storage — is straightforward but has a scaling problem: every upload consumes a thread and network bandwidth on your API server. For high-volume uploads, you switch to SAS tokens (Shared Access Signatures) so clients upload directly to Blob Storage, bypassing your server entirely.

// C# — Generate a SAS URL for direct client upload
[HttpPost("presign")]
public IActionResult GenerateSasUrl([FromBody] PresignRequest request)
{
    var container = _blobServiceClient.GetBlobContainerClient("uploads");
    var blobName = $"{Guid.NewGuid()}{request.Extension}";
    var blob = container.GetBlobClient(blobName);

    var sasUri = blob.GenerateSasUri(BlobSasPermissions.Write, DateTimeOffset.UtcNow.AddMinutes(15));
    return Ok(new { uploadUrl = sasUri, blobName });
}

The Node.js ecosystem uses the same two patterns but with different libraries and terminology. “SAS tokens” become “pre-signed URLs,” and “Azure Blob Storage” is most commonly replaced by Amazon S3 or Cloudflare R2.


The Node.js Way

Multer: The IFormFile Equivalent

Multer is the standard Express/NestJS middleware for handling multipart/form-data — it is the Node.js equivalent of ASP.NET Core’s built-in IFormFile binding. NestJS ships with a FileInterceptor decorator that wraps Multer.

# multer ships as a dependency of @nestjs/platform-express; only the type definitions are needed
pnpm add -D @types/multer

// TypeScript — NestJS file upload with FileInterceptor
// src/uploads/uploads.controller.ts
import {
    Controller,
    Post,
    UploadedFile,
    UseInterceptors,
    BadRequestException,
    ParseFilePipe,
    MaxFileSizeValidator,
    FileTypeValidator,
} from "@nestjs/common";
import { FileInterceptor } from "@nestjs/platform-express";
// Express.Multer.File comes from the global Express namespace provided by @types/multer
import { UploadsService } from "./uploads.service";

@Controller("uploads")
export class UploadsController {
    constructor(private readonly uploadsService: UploadsService) {}

    @Post()
    @UseInterceptors(FileInterceptor("file"))
    async upload(
        @UploadedFile(
            new ParseFilePipe({
                validators: [
                    new MaxFileSizeValidator({ maxSize: 10 * 1024 * 1024 }), // 10 MB
                    new FileTypeValidator({ fileType: /image\/(jpeg|png|webp)/ }),
                ],
            }),
        )
        file: Express.Multer.File,
    ) {
        const url = await this.uploadsService.uploadToStorage(file);
        return { url };
    }
}

FileInterceptor("file") reads the field name from the multipart form — equivalent to the parameter name in IFormFile file. ParseFilePipe applies validators before your handler runs, similar to model validation with [Required] and custom [FileExtensions] attributes in C#.

By default, Multer buffers files in memory. For large files, configure disk storage:

// TypeScript — Multer disk storage configuration
import { diskStorage } from "multer";
import { extname } from "path";

const multerDiskConfig = {
    storage: diskStorage({
        destination: "/tmp/uploads",
        filename: (_req, file, callback) => {
            const uniqueName = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
            callback(null, `${uniqueName}${extname(file.originalname)}`);
        },
    }),
};

@UseInterceptors(FileInterceptor("file", multerDiskConfig))

For the NestJS module, register MulterModule to set global defaults:

// TypeScript — uploads.module.ts
import { Module } from "@nestjs/common";
import { MulterModule } from "@nestjs/platform-express";
import { UploadsController } from "./uploads.controller";
import { UploadsService } from "./uploads.service";

@Module({
    imports: [
        MulterModule.register({
            limits: {
                fileSize: 10 * 1024 * 1024, // 10 MB
                files: 5,
            },
        }),
    ],
    controllers: [UploadsController],
    providers: [UploadsService],
})
export class UploadsModule {}

Cloudflare R2: S3-Compatible Storage

Cloudflare R2 is the recommended object storage for this stack. It is S3-compatible (same API surface, same SDK) but has no egress fees and is significantly cheaper at scale. You use the official AWS SDK @aws-sdk/client-s3 — Cloudflare R2 accepts the same requests.

pnpm add @aws-sdk/client-s3 @aws-sdk/s3-request-presigner

Configure the S3 client to point at your R2 endpoint:

// TypeScript — R2 client configuration
// src/storage/r2.client.ts
import { S3Client } from "@aws-sdk/client-s3";

export function createR2Client(): S3Client {
    return new S3Client({
        region: "auto",
        endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
        credentials: {
            accessKeyId: process.env.R2_ACCESS_KEY_ID!,
            secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
        },
    });
}

The service that performs the actual upload:

// TypeScript — uploads.service.ts (server-side streaming to R2)
import { Injectable } from "@nestjs/common";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { createR2Client } from "../storage/r2.client";
import { randomUUID } from "crypto";
import { extname } from "path";

@Injectable()
export class UploadsService {
    private readonly s3: S3Client = createR2Client();
    private readonly bucket = process.env.R2_BUCKET_NAME!;
    private readonly publicUrl = process.env.R2_PUBLIC_URL!; // e.g. https://cdn.example.com

    async uploadToStorage(file: Express.Multer.File): Promise<string> {
        const key = `uploads/${randomUUID()}${extname(file.originalname)}`;

        await this.s3.send(
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                Body: file.buffer,
                ContentType: file.mimetype,
                ContentLength: file.size,
            }),
        );

        return `${this.publicUrl}/${key}`;
    }
}

Pre-Signed URLs: The Preferred Architecture

Routing large uploads through your API server is wasteful. The preferred pattern — equivalent to Azure SAS tokens — is pre-signed URLs: your server generates a short-lived signed URL, hands it to the client, and the client uploads directly to R2. Your API server never touches the file bytes.

sequenceDiagram
    participant B as Browser
    participant A as API Server
    participant R as Cloudflare R2

    B->>A: POST /uploads/presign { contentType, size }
    A->>R: generatePresignedUrl()
    R-->>A: { uploadUrl, key }
    A-->>B: { uploadUrl, key }
    B->>R: PUT uploadUrl (file bytes, directly to R2)
    R-->>B: 200 OK
    B->>A: POST /uploads/confirm { key }
    Note over A: record in DB, process...
    A-->>B: { finalUrl }

// TypeScript — Pre-signed URL generation
// src/uploads/uploads.service.ts (add to existing service)
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { BadRequestException } from "@nestjs/common"; // alongside the file's existing imports

interface PresignRequest {
    contentType: string;
    sizeBytes: number;
    extension: string;
}

interface PresignResponse {
    uploadUrl: string;
    key: string;
    expiresAt: Date;
}

const ALLOWED_TYPES = new Set(["image/jpeg", "image/png", "image/webp", "application/pdf"]);
const MAX_SIZE_BYTES = 50 * 1024 * 1024; // 50 MB

@Injectable()
export class UploadsService {
    // ...existing code...

    async createPresignedUrl(request: PresignRequest): Promise<PresignResponse> {
        if (!ALLOWED_TYPES.has(request.contentType)) {
            throw new BadRequestException(`Content type ${request.contentType} is not allowed`);
        }

        if (request.sizeBytes > MAX_SIZE_BYTES) {
            throw new BadRequestException(
                `File size ${request.sizeBytes} exceeds maximum of ${MAX_SIZE_BYTES}`,
            );
        }

        const key = `uploads/${randomUUID()}.${request.extension.replace(/^\./, "")}`;
        const expiresIn = 900; // 15 minutes

        const uploadUrl = await getSignedUrl(
            this.s3,
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                ContentType: request.contentType,
                ContentLength: request.sizeBytes,
            }),
            { expiresIn },
        );

        return {
            uploadUrl,
            key,
            expiresAt: new Date(Date.now() + expiresIn * 1000),
        };
    }
}

// TypeScript — Controller endpoints for the pre-signed flow (add inside UploadsController)
import { Body, Post } from "@nestjs/common";

@Post("presign")
async presign(
    @Body() body: { contentType: string; sizeBytes: number; extension: string },
) {
    return this.uploadsService.createPresignedUrl(body);
}

@Post("confirm")
async confirm(@Body() body: { key: string }) {
    // Validate the object actually exists in R2, then save to DB
    return this.uploadsService.confirmUpload(body.key);
}

On the frontend, the client-side upload flow:

// TypeScript — Frontend upload using pre-signed URL
// src/lib/upload.ts
interface UploadResult {
    key: string;
    publicUrl: string;
}

export async function uploadFile(file: File): Promise<UploadResult> {
    // Step 1: Get pre-signed URL from your API
    const presignResponse = await fetch("/api/uploads/presign", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            contentType: file.type,
            sizeBytes: file.size,
            extension: file.name.split(".").pop() ?? "bin",
        }),
    });

    if (!presignResponse.ok) {
        throw new Error("Failed to get upload URL");
    }

    const { uploadUrl, key } = await presignResponse.json();

    // Step 2: PUT directly to R2 — your API server is not involved
    const uploadResponse = await fetch(uploadUrl, {
        method: "PUT",
        body: file,
        headers: { "Content-Type": file.type },
    });

    if (!uploadResponse.ok) {
        throw new Error("Upload to storage failed");
    }

    // Step 3: Confirm with your API so it can record the upload
    const confirmResponse = await fetch("/api/uploads/confirm", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ key }),
    });

    return confirmResponse.json();
}
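
Wiring this to a file input is straightforward. Note that fetch exposes no upload progress events; if you need a progress bar, swap step 2 for XMLHttpRequest. A usage sketch (the #avatar-input selector is hypothetical):

// TypeScript — calling uploadFile from a file input (usage sketch)
const input = document.querySelector<HTMLInputElement>("#avatar-input");

input?.addEventListener("change", async () => {
    const file = input.files?.[0];
    if (!file) return;

    try {
        const { publicUrl } = await uploadFile(file);
        console.log("Uploaded to", publicUrl);
    } catch (error) {
        console.error("Upload failed", error);
    }
});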

Image Processing with Sharp

Sharp is the standard Node.js library for image processing — resizing, format conversion, compression. The C# equivalent is ImageSharp (from SixLabors) or the older System.Drawing. Sharp wraps libvips, which is substantially faster than the GDI+ backend that System.Drawing uses.

pnpm add sharp

Recent versions of sharp ship their own TypeScript definitions, so no separate @types package is needed.

// TypeScript — Image processing before upload to R2
import sharp from "sharp";

// Add inside UploadsService — it already has this.s3, this.bucket, and this.publicUrl
async processAndUpload(
    file: Express.Multer.File,
    variants: Array<{ width: number; suffix: string }>,
): Promise<Record<string, string>> {
    const results: Record<string, string> = {};
    const baseKey = `images/${randomUUID()}`;

    for (const variant of variants) {
        // Resize and convert to WebP — generally 30-50% smaller than JPEG
        const processed = await sharp(file.buffer)
            .resize(variant.width, null, {
                withoutEnlargement: true,
                fit: "inside",
            })
            .webp({ quality: 85 })
            .toBuffer();

        const key = `${baseKey}-${variant.suffix}.webp`;

        await this.s3.send(
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                Body: processed,
                ContentType: "image/webp",
            }),
        );

        results[variant.suffix] = `${this.publicUrl}/${key}`;
    }

    return results;
}

// Usage (from inside the service) — generate thumbnail, medium, and full-size variants
const urls = await this.processAndUpload(file, [
    { width: 200, suffix: "thumb" },
    { width: 800, suffix: "medium" },
    { width: 1920, suffix: "full" },
]);

Key Differences

| Concept | C# / Azure | Node.js / Cloudflare R2 |
|---|---|---|
| Multipart form binding | IFormFile built into ASP.NET | Multer middleware (FileInterceptor in NestJS) |
| File validation | [FileExtensions], [MaxFileSize] attributes | ParseFilePipe with MaxFileSizeValidator, FileTypeValidator |
| Blob storage SDK | Azure.Storage.Blobs (Azure-specific) | @aws-sdk/client-s3 (works with R2, S3, MinIO, etc.) |
| Direct client upload | Azure SAS tokens | S3 pre-signed URLs — same concept, different name |
| Pre-signed URL lib | BlobClient.GenerateSasUri() | @aws-sdk/s3-request-presigner getSignedUrl() |
| Image processing | SixLabors.ImageSharp | sharp (wraps libvips, very fast) |
| Storage cost model | Pay for egress | R2 has no egress fees — significant savings at scale |
| Content validation | MIME type from IFormFile.ContentType | file.mimetype from Multer — also check magic bytes for security |
| Streaming large files | OpenReadStream() on IFormFile | Multer memoryStorage (buffer) or diskStorage (temp file) |

Gotchas for .NET Engineers

Gotcha 1: MIME Type Validation is Not Enough — Check Magic Bytes

FileTypeValidator in NestJS and the ContentType field in IFormFile both rely on what the client reports. A malicious client can rename exploit.exe to photo.jpg and send image/jpeg as the content type. The only reliable validation is checking the file’s magic bytes — the first few bytes that identify the actual format.

Use the file-type package for this:

pnpm add file-type

// TypeScript — magic byte validation in the upload service
// Note: file-type v17+ is ESM-only; for a CommonJS NestJS build, pin v16 and import { fromBuffer } instead
import { fileTypeFromBuffer } from "file-type";

async function validateMagicBytes(buffer: Buffer, declaredMimeType: string): Promise<void> {
    const detected = await fileTypeFromBuffer(buffer);

    if (!detected) {
        throw new BadRequestException("Could not determine file type from content");
    }

    const allowed = new Set(["image/jpeg", "image/png", "image/webp"]);

    if (!allowed.has(detected.mime)) {
        throw new BadRequestException(`File content is ${detected.mime}, not an allowed image type`);
    }

    if (detected.mime !== declaredMimeType) {
        throw new BadRequestException(
            `Declared type ${declaredMimeType} does not match actual content ${detected.mime}`,
        );
    }
}

In .NET, the System.Net.Mime.ContentType class parses MIME types but does not inspect content. The same vulnerability exists. This is not a Node.js-specific problem, but it is frequently forgotten in both ecosystems.

Gotcha 2: Multer Memory Storage Will OOM Your Server on Large Files

The default Multer configuration in NestJS uses memoryStorage, which buffers the entire file in memory before your handler runs. If five users simultaneously upload 50 MB files, that is 250 MB of file data sitting in RAM on top of your application’s baseline memory. Under concurrent load, this causes out-of-memory crashes.

For files larger than about 5 MB, use disk storage or, better yet, the pre-signed URL pattern so files never hit your server at all:

// TypeScript — always set explicit size limits for in-memory uploads
import { memoryStorage } from "multer";

MulterModule.register({
    storage: memoryStorage(),
    limits: {
        fileSize: 5 * 1024 * 1024, // Enforce a hard limit — 5 MB for in-memory
        files: 1,
    },
}),

If you need server-side processing of larger files (Sharp pipelines, virus scanning), use disk storage and stream the file from disk rather than loading it into memory.
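
A sketch of that pattern, streaming a diskStorage upload into R2 without ever buffering the whole file (assuming it lives in UploadsService next to the existing fields; note that Multer does not delete its temp files for you):

// TypeScript — stream a diskStorage file to R2 (add inside UploadsService)
import { createReadStream } from "fs";
import { unlink } from "fs/promises";

async uploadFromDisk(file: Express.Multer.File): Promise<string> {
    const key = `uploads/${randomUUID()}${extname(file.originalname)}`;
    try {
        await this.s3.send(
            new PutObjectCommand({
                Bucket: this.bucket,
                Key: key,
                Body: createReadStream(file.path), // streamed, never fully in RAM
                ContentType: file.mimetype,
                ContentLength: file.size, // required when Body is a stream
            }),
        );
        return `${this.publicUrl}/${key}`;
    } finally {
        await unlink(file.path); // clean up Multer's temp file either way
    }
}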

Gotcha 3: Pre-Signed URLs Require CORS Configuration on Your Bucket

When a browser makes a PUT request directly to Cloudflare R2 using a pre-signed URL, the browser sends a CORS preflight request (OPTIONS). If your R2 bucket does not have a CORS policy allowing PUT from your domain, every upload will fail with a CORS error in the browser — and the error will look like a network failure, not a configuration problem.

Configure CORS on your R2 bucket via the Cloudflare dashboard or Wrangler:

// R2 CORS policy (JSON format in Cloudflare dashboard)
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT", "GET", "HEAD"],
    "AllowedHeaders": ["Content-Type", "Content-Length"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]

In Azure, the equivalent is the CORS configuration on the Storage Account. This is easy to overlook in development (where you may use wildcard origins) and painful to debug in production.
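
Because R2 speaks the S3 API, the same policy can also be applied from code, which keeps the setting reproducible across environments. A sketch using PutBucketCorsCommand and the R2 client from earlier (assuming R2 accepts the bucket CORS calls on your account; the origin is illustrative):

// TypeScript — applying the CORS policy via the S3 API (sketch)
import { PutBucketCorsCommand } from "@aws-sdk/client-s3";
import { createR2Client } from "../storage/r2.client";

const s3 = createR2Client();

await s3.send(
    new PutBucketCorsCommand({
        Bucket: process.env.R2_BUCKET_NAME!,
        CORSConfiguration: {
            CORSRules: [
                {
                    AllowedOrigins: ["https://app.example.com"],
                    AllowedMethods: ["PUT", "GET", "HEAD"],
                    AllowedHeaders: ["Content-Type", "Content-Length"],
                    ExposeHeaders: ["ETag"],
                    MaxAgeSeconds: 3600,
                },
            ],
        },
    }),
);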

Gotcha 4: Sharp is a Native Module — CI and Docker Require Architecture Matching

Sharp bundles precompiled libvips binaries for specific CPU architectures and Node.js versions. If you build your Docker image on an Apple Silicon Mac (arm64) and deploy to a Linux amd64 server, Sharp will fail at startup with a cryptic binary compatibility error.

Always build Docker images for the target architecture:

# Dockerfile — specify platform explicitly
FROM --platform=linux/amd64 node:22-alpine AS base

Or in your build command:

docker build --platform linux/amd64 -t my-app .

In CI (GitHub Actions), run build jobs that include Sharp on the same architecture you deploy to (runs-on: ubuntu-latest is amd64). Managed platforms such as AWS Lambda define their target architecture up front (and Cloudflare Workers cannot run native modules like Sharp at all), but the architecture mismatch remains a common local-to-cloud gap with container deployments.

Gotcha 5: The Confirm Step Is Not Optional

With pre-signed URL uploads, there is a window between “client received a pre-signed URL” and “upload completed.” If you skip the /confirm endpoint and instead assume the upload happened because you generated the URL, you will have dangling references in your database pointing to keys that were never uploaded — because the upload failed, the user closed the tab, or a malicious client never uploaded at all.

The confirm step should:

  1. Verify the object actually exists in R2 (HeadObjectCommand)
  2. Validate that the key matches what was issued in the presign step (prevent key substitution)
  3. Record the upload in your database and return the canonical public URL
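
A minimal sketch of that method, using HeadObjectCommand for step 1; the prefix check stands in for a real issued-key record, and uploadsRepository is hypothetical:

// TypeScript — confirmUpload sketch (add inside UploadsService)
import { HeadObjectCommand } from "@aws-sdk/client-s3";
import { NotFoundException } from "@nestjs/common";

async confirmUpload(key: string): Promise<{ finalUrl: string }> {
    // Step 2 (simplified): only accept keys in the namespace we issue.
    // Stricter: persist issued keys at presign time and look them up here.
    if (!key.startsWith("uploads/")) {
        throw new BadRequestException("Unrecognized upload key");
    }

    // Step 1: verify the object actually landed in R2
    try {
        await this.s3.send(new HeadObjectCommand({ Bucket: this.bucket, Key: key }));
    } catch {
        throw new NotFoundException("Upload not found in storage");
    }

    // Step 3: record the upload and return the canonical URL
    const finalUrl = `${this.publicUrl}/${key}`;
    // await this.uploadsRepository.save({ key, url: finalUrl }); // hypothetical
    return { finalUrl };
}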

Hands-On Exercise

Build a profile picture upload flow using pre-signed URLs and Sharp for image processing. This exercise covers the full lifecycle: presign, upload, confirm, and serve.

Prerequisites: A running NestJS project with a UsersModule and Cloudflare R2 credentials in your .env.

Step 1 — Environment setup

Add to .env:

R2_ACCOUNT_ID=your_account_id
R2_ACCESS_KEY_ID=your_access_key
R2_SECRET_ACCESS_KEY=your_secret_key
R2_BUCKET_NAME=your-bucket
R2_PUBLIC_URL=https://cdn.example.com

Step 2 — Create the UploadsModule

// src/uploads/uploads.module.ts
import { Module } from "@nestjs/common";
import { UploadsController } from "./uploads.controller";
import { UploadsService } from "./uploads.service";

@Module({
    controllers: [UploadsController],
    providers: [UploadsService],
    exports: [UploadsService],
})
export class UploadsModule {}

Step 3 — Implement the full service

Implement UploadsService with three methods:

  • createPresignedUrl(contentType, sizeBytes, extension) — validates input, returns a signed PUT URL
  • processAvatarUpload(key) — downloads the raw upload from R2, runs it through Sharp (200x200 WebP thumbnail), re-uploads the processed version, returns the public URL
  • confirmUpload(rawKey) — verifies the object exists, triggers processAvatarUpload, returns the final URL

Step 4 — Add magic byte validation

Magic byte validation cannot happen in createPresignedUrl, because the client only sends metadata at that point, not the file. Add the check in the confirm step instead: download the first 16 bytes of the raw upload using GetObjectCommand with a Range: bytes=0-15 header and validate them as shown in the Gotchas section.
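
A sketch of that check, reusing the validation idea from Gotcha 1 (assuming the same R2 client; JPEG, PNG, and WebP are all identifiable within the first 16 bytes):

// TypeScript — magic byte check on the raw upload (add inside UploadsService)
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { fileTypeFromBuffer } from "file-type";

async validateUploadedMagicBytes(key: string): Promise<void> {
    // Fetch only the leading bytes of the object, not the whole file
    const response = await this.s3.send(
        new GetObjectCommand({ Bucket: this.bucket, Key: key, Range: "bytes=0-15" }),
    );

    const header = Buffer.from(await response.Body!.transformToByteArray());
    const detected = await fileTypeFromBuffer(header);

    if (!detected || !["image/jpeg", "image/png", "image/webp"].includes(detected.mime)) {
        throw new BadRequestException("Uploaded content is not an allowed image type");
    }
}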

Step 5 — Wire up the confirm endpoint

The confirm endpoint should call confirmUpload, get the processed URL, and store it in the user’s profile record via the UsersService.

Step 6 — Test the full flow

# Step 1: Get presigned URL
curl -X POST http://localhost:3000/uploads/presign \
  -H "Content-Type: application/json" \
  -d '{"contentType":"image/jpeg","sizeBytes":204800,"extension":"jpg"}'

# Step 2: Upload directly to R2 using the URL from step 1
curl -X PUT "PRESIGNED_URL_HERE" \
  -H "Content-Type: image/jpeg" \
  --data-binary @test-photo.jpg

# Step 3: Confirm the upload
curl -X POST http://localhost:3000/uploads/confirm \
  -H "Content-Type: application/json" \
  -d '{"key":"uploads/abc123.jpg"}'

Verify you get back a processed WebP URL pointing to a 200x200 thumbnail.
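
To check this programmatically instead of by eye, you can fetch the returned URL and inspect it with sharp (an ad-hoc script; the URL is whatever step 3 returned):

// TypeScript — verify the processed thumbnail (ad-hoc check script)
import sharp from "sharp";

const res = await fetch("https://cdn.example.com/images/abc123-thumb.webp"); // from step 3
const buffer = Buffer.from(await res.arrayBuffer());

const meta = await sharp(buffer).metadata();
console.log(meta.format, meta.width, meta.height); // expect: webp 200 200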


Quick Reference

| .NET / Azure Concept | Node.js / R2 Equivalent | Notes |
|---|---|---|
| IFormFile | Express.Multer.File | Multer provides the same metadata |
| IFormFile.ContentType | file.mimetype | Don’t trust either — validate magic bytes |
| IFormFile.Length | file.size | Same semantics |
| IFormFile.OpenReadStream() | file.buffer (memoryStorage) or file.path (diskStorage) | Choose based on file size |
| [RequestSizeLimit] attribute | MulterModule limits.fileSize | Set at module level or interceptor |
| Azure.Storage.Blobs.BlobClient | @aws-sdk/client-s3 S3Client | R2 uses the same S3 SDK |
| BlobClient.UploadAsync(stream) | PutObjectCommand | Pass Body: buffer or a Node.js Readable |
| BlobClient.GenerateSasUri() | getSignedUrl() from @aws-sdk/s3-request-presigner | Same 15-minute expiry is common |
| Azure CORS on Storage Account | R2 CORS policy in Cloudflare dashboard | Required for direct browser uploads |
| SixLabors.ImageSharp | sharp | Sharp wraps libvips — fast, native binary |
| image.Resize(width, height) | sharp(buffer).resize(width, height).toBuffer() | Sharp supports fit modes, aspect ratio preservation |
| image.SaveAsWebpAsync() | .webp({ quality: 85 }) | WebP is the default modern format |
| Azure CDN serving blobs | R2 custom domain / Cloudflare CDN | R2 has zero egress fees to Cloudflare CDN |

Common Multer storage choices:

| Scenario | Storage | Reason |
|---|---|---|
| Files < 5 MB, processed in-memory | memoryStorage() | Simplest; no temp file cleanup |
| Files > 5 MB, server-side processing | diskStorage() | Avoids RAM pressure |
| Files any size, no server processing | Pre-signed URL | Preferred; API never touches bytes |
| Virus scanning required | diskStorage() + scan | Must land on disk for most scanners |

Further Reading