Background Jobs and Task Scheduling

For .NET engineers who know: IHostedService, BackgroundService, Hangfire queues and recurring jobs, and the Worker Service project template

You’ll learn: How NestJS handles background processing with BullMQ (queue-based jobs) and @nestjs/schedule (cron), and why Node.js’s single-threaded nature changes the rules for CPU-intensive work

Time: 15-20 min read


The .NET Way (What You Already Know)

.NET’s background processing ecosystem has three layers:

IHostedService / BackgroundService — long-running in-process services, started and stopped with the application lifecycle:

public class EmailWorker : BackgroundService
{
    private readonly IEmailQueue _queue;
    private readonly ILogger<EmailWorker> _logger;

    public EmailWorker(IEmailQueue queue, ILogger<EmailWorker> logger)
    {
        _queue = queue;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var job = await _queue.DequeueAsync(stoppingToken);
            if (job != null)
            {
                await ProcessEmailAsync(job);
            }
        }
    }
}

Hangfire — a production-grade job queue with a Redis or SQL Server backend, retries, dashboards, and recurring jobs:

// Enqueue a fire-and-forget job
BackgroundJob.Enqueue(() => emailService.SendWelcomeEmail(userId));

// Schedule a delayed job
BackgroundJob.Schedule(() => invoiceService.SendReminder(invoiceId),
    TimeSpan.FromDays(3));

// Recurring job — cron expression
RecurringJob.AddOrUpdate("daily-report",
    () => reportService.GenerateDailyReport(),
    Cron.Daily(8)); // Every day at 08:00

// Continuation job — runs after another completes
var jobId = BackgroundJob.Enqueue(() => ProcessOrder(orderId));
BackgroundJob.ContinueJobWith(jobId, () => SendConfirmationEmail(orderId));

Worker Service — a separate process for CPU-heavy or independently deployable background work, typically communicating with the main application through a message broker or shared job storage.

.NET’s key advantage: BackgroundService runs in a thread pool. CPU-intensive work in a background thread does not block the main request-handling threads. The runtime manages this for you.


The NestJS Way

NestJS uses two libraries for background work:

  • BullMQ via @nestjs/bullmq — Redis-backed job queue. This is the Hangfire equivalent: fire-and-forget, delayed, scheduled, and retried jobs with a monitoring dashboard.
  • @nestjs/schedule — cron-based scheduling for recurring tasks. This is the RecurringJob.AddOrUpdate() equivalent, implemented with the cron package.

Installation

# BullMQ — queue-based background processing
npm install @nestjs/bullmq bullmq ioredis

# Schedule — cron jobs
npm install @nestjs/schedule

# Bull Board — monitoring dashboard (equivalent to Hangfire Dashboard)
npm install @bull-board/api @bull-board/express

Setting Up BullMQ

Register the queue module once, referencing your Redis connection:

// app.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { ScheduleModule } from '@nestjs/schedule';

@Module({
  imports: [
    // BullMQ — connect to Redis
    BullModule.forRoot({
      connection: {
        host: process.env.REDIS_HOST ?? 'localhost',
        port: Number(process.env.REDIS_PORT) || 6379,
        password: process.env.REDIS_PASSWORD,
      },
    }),

    // Scheduler — activates @Cron() decorators
    ScheduleModule.forRoot(),

    // Register individual queues
    BullModule.registerQueue({ name: 'email' }),
    BullModule.registerQueue({ name: 'reports' }),
    BullModule.registerQueue({ name: 'image-processing' }),
  ],
})
export class AppModule {}

Defining a Processor (the Worker)

A processor is a class decorated with @Processor() that handles jobs from a named queue. This is the equivalent of implementing Execute in a Hangfire job class, or the ExecuteAsync loop in a BackgroundService:

// email.processor.ts
import { Processor, WorkerHost, OnWorkerEvent } from '@nestjs/bullmq';
import { Job } from 'bullmq';
import { Logger } from '@nestjs/common';
import { EmailService } from './email.service';

// Job data shapes — define these explicitly, like Hangfire job arguments
export interface WelcomeEmailJobData {
  userId: string;
  email: string;
  firstName: string;
}

export interface InvoiceReminderJobData {
  invoiceId: string;
  customerId: string;
  daysOverdue: number;
}

// Union of all job types this processor handles
export type EmailJobData = WelcomeEmailJobData | InvoiceReminderJobData;

// @Processor('queue-name') — binds this class to the 'email' queue
@Processor('email')
export class EmailProcessor extends WorkerHost {
  private readonly logger = new Logger(EmailProcessor.name);

  constructor(private readonly emailService: EmailService) {
    super();
  }

  // process() is called for every job dequeued — equivalent to Execute() in Hangfire
  async process(job: Job<EmailJobData>): Promise<void> {
    this.logger.log(`Processing job ${job.id}, name: ${job.name}`);

    switch (job.name) {
      case 'welcome':
        await this.handleWelcomeEmail(job as Job<WelcomeEmailJobData>);
        break;
      case 'invoice-reminder':
        await this.handleInvoiceReminder(job as Job<InvoiceReminderJobData>);
        break;
      default:
        this.logger.warn(`Unknown job name: ${job.name}`);
    }
  }

  private async handleWelcomeEmail(job: Job<WelcomeEmailJobData>): Promise<void> {
    const { userId, email, firstName } = job.data;
    await this.emailService.sendWelcome({ userId, email, firstName });
  }

  private async handleInvoiceReminder(job: Job<InvoiceReminderJobData>): Promise<void> {
    const { invoiceId, customerId, daysOverdue } = job.data;
    await this.emailService.sendInvoiceReminder({ invoiceId, customerId, daysOverdue });
  }

  // Lifecycle events — equivalent to Hangfire's IServerFilter
  @OnWorkerEvent('completed')
  onCompleted(job: Job) {
    this.logger.log(`Job ${job.id} completed in ${Date.now() - job.processedOn!}ms`);
  }

  @OnWorkerEvent('failed')
  onFailed(job: Job, error: Error) {
    this.logger.error(`Job ${job.id} failed: ${error.message}`, error.stack);
  }

  @OnWorkerEvent('stalled')
  onStalled(jobId: string) {
    this.logger.warn(`Job ${jobId} stalled — worker crashed during processing`);
  }
}

Enqueuing Jobs from a Service

// user.service.ts
import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bullmq';
import { Queue } from 'bullmq';
import { WelcomeEmailJobData, InvoiceReminderJobData } from './email.processor';

@Injectable()
export class UserService {
  constructor(
    @InjectQueue('email') private readonly emailQueue: Queue,
  ) {}

  async registerUser(data: { email: string; firstName: string }): Promise<void> {
    // Enqueue a fire-and-forget job — equivalent to BackgroundJob.Enqueue(...)
    await this.emailQueue.add(
      'welcome',                        // Job name — used to route in processor
      { userId: 'generated-uuid', ...data } satisfies WelcomeEmailJobData,
      {
        attempts: 3,                    // Retry up to 3 times — like Hangfire's AutomaticRetry (default 10 attempts)
        backoff: {
          type: 'exponential',          // Wait 1s, 2s, 4s between retries
          delay: 1000,
        },
        removeOnComplete: { count: 100 }, // Keep last 100 completed jobs for debugging
        removeOnFail: { count: 50 },
      },
    );
  }

  async scheduleInvoiceReminder(invoiceId: string, customerId: string): Promise<void> {
    // Delayed job — equivalent to BackgroundJob.Schedule(..., TimeSpan.FromDays(3))
    await this.emailQueue.add(
      'invoice-reminder',
      { invoiceId, customerId, daysOverdue: 3 } satisfies InvoiceReminderJobData,
      {
        delay: 3 * 24 * 60 * 60 * 1000, // 3 days in milliseconds
        attempts: 5,
        backoff: { type: 'fixed', delay: 5 * 60 * 1000 }, // retry every 5 minutes
      },
    );
  }
}

Recurring Jobs with @nestjs/schedule

For cron-style recurring tasks, @nestjs/schedule is the tool. It is simpler than BullMQ — it runs in-process with no Redis dependency, no retry logic, and no persistence. Use it for lightweight recurring work (report generation, cache warming, cleanup). Use BullMQ for anything that needs reliability, retries, or visibility.

// report.scheduler.ts
import { Injectable, Logger } from '@nestjs/common';
import { Cron, CronExpression, Interval, Timeout } from '@nestjs/schedule';
import { ReportService } from './report.service';

@Injectable()
export class ReportScheduler {
  private readonly logger = new Logger(ReportScheduler.name);

  constructor(private readonly reportService: ReportService) {}

  // Standard cron expression — equivalent to RecurringJob.AddOrUpdate("daily-report", ..., "0 8 * * *")
  @Cron('0 8 * * *', { timeZone: 'America/New_York' })
  async generateDailyReport(): Promise<void> {
    this.logger.log('Generating daily report...');
    await this.reportService.generateDaily();
  }

  // CronExpression enum provides common expressions without magic strings
  @Cron(CronExpression.EVERY_HOUR)
  async refreshExchangeRates(): Promise<void> {
    await this.reportService.updateExchangeRates();
  }

  // Fixed interval — equivalent to a Timer-based BackgroundService
  @Interval(30_000) // Every 30 seconds
  async checkExternalApiHealth(): Promise<void> {
    await this.reportService.pingExternalApis();
  }

  // One-shot delayed execution on startup — equivalent to Task.Delay() at start of ExecuteAsync
  @Timeout(5000) // Run once, 5 seconds after application starts
  async seedCacheOnStartup(): Promise<void> {
    this.logger.log('Warming cache after startup...');
    await this.reportService.warmCache();
  }
}

Register the scheduler in its module:

// report.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { ReportScheduler } from './report.scheduler';
import { ReportService } from './report.service';
import { EmailProcessor } from './email.processor';
import { EmailService } from './email.service';

@Module({
  imports: [BullModule.registerQueue({ name: 'email' })],
  providers: [ReportScheduler, ReportService, EmailProcessor, EmailService],
})
export class ReportModule {}

Bull Board — The Monitoring Dashboard

Bull Board is the equivalent of the Hangfire Dashboard. It shows queued, active, completed, failed, and delayed jobs, with the ability to retry failed jobs manually.

// main.ts — add Bull Board
import { NestFactory } from '@nestjs/core';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { Queue } from 'bullmq';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Set up Bull Board — access at /admin/queues
  const serverAdapter = new ExpressAdapter();
  serverAdapter.setBasePath('/admin/queues');

  // Get queue instances — note this opens its own Redis connection; you can
  // instead reuse the Nest-managed queue (e.g. app.get(getQueueToken('email')))
  const emailQueue = new Queue('email', {
    connection: { host: process.env.REDIS_HOST ?? 'localhost' },
  });

  createBullBoard({
    queues: [new BullMQAdapter(emailQueue)],
    serverAdapter,
  });

  // Mount as Express middleware
  const expressApp = app.getHttpAdapter().getInstance();
  expressApp.use('/admin/queues', serverAdapter.getRouter());

  await app.listen(3000);
}

Protect the dashboard route with an auth middleware in production — the equivalent of passing an Authorization filter to UseHangfireDashboard(options => { options.Authorization = [...] }).
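
A minimal sketch of such a guard, written as a plain Express middleware (the ADMIN_USER and ADMIN_PASSWORD variable names are illustrative, not part of Bull Board):

// dashboard-auth.ts — a basic-auth check in front of Bull Board
import { Request, Response, NextFunction } from 'express';

export function dashboardAuth(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? '';
  const [scheme, encoded] = header.split(' ');
  const [user, password] = Buffer.from(encoded ?? '', 'base64').toString().split(':');

  if (
    scheme === 'Basic' &&
    user === process.env.ADMIN_USER &&
    password === process.env.ADMIN_PASSWORD
  ) {
    return next();
  }

  res.setHeader('WWW-Authenticate', 'Basic realm="queues"');
  res.status(401).send('Authentication required');
}

// In bootstrap(), mount it before the dashboard router:
// expressApp.use('/admin/queues', dashboardAuth, serverAdapter.getRouter());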

Running Processors in a Separate Worker Process

For true isolation (the Node.js equivalent of a .NET Worker Service), run the processor in a separate process that only imports the queue module and processor, with no HTTP server:

// apps/worker/src/main.ts — separate entry point
import { NestFactory } from '@nestjs/core';
import { WorkerModule } from './worker.module';

async function bootstrap() {
  const app = await NestFactory.createApplicationContext(WorkerModule);
  // No HTTP listener — this process only processes queue jobs
  console.log('Worker process started');
}
bootstrap();

// apps/worker/src/worker.module.ts
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bullmq';
import { EmailProcessor } from './email.processor';
import { EmailService } from './email.service';

@Module({
  imports: [
    BullModule.forRoot({ connection: { host: process.env.REDIS_HOST } }),
    BullModule.registerQueue({ name: 'email' }),
  ],
  providers: [EmailProcessor, EmailService],
})
export class WorkerModule {}

This pattern maps directly to the .NET Worker Service project: a separate deployable unit that consumes from the queue without serving HTTP traffic.
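
One refinement worth considering for that separate process: enable Nest's shutdown hooks so the application shutdown lifecycle runs when the orchestrator sends SIGTERM, giving @nestjs/bullmq a chance to close its workers cleanly. A sketch of the same entry point with that added:

// apps/worker/src/main.ts — variant with graceful shutdown
import { NestFactory } from '@nestjs/core';
import { WorkerModule } from './worker.module';

async function bootstrap() {
  const app = await NestFactory.createApplicationContext(WorkerModule);
  app.enableShutdownHooks(); // Listen for SIGINT/SIGTERM and run Nest's shutdown lifecycle
  console.log('Worker process started');
}
bootstrap();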

CPU-Intensive Work: Worker Threads

This is where Node.js diverges fundamentally from .NET. In .NET, a BackgroundService runs in the thread pool. CPU-intensive work in a background thread does not block request handling threads — the runtime schedules both concurrently.

In Node.js, there is one thread. A CPU-intensive task (image resizing, PDF generation, complex computation) running in a BullMQ processor blocks the entire process — including all other queued job processing. The event loop stalls for the duration.

The solution is worker_threads (the Node.js equivalent of spawning a .NET thread for CPU work):

// image.processor.ts
import { Processor, WorkerHost } from '@nestjs/bullmq';
import { Job } from 'bullmq';
import { Worker } from 'worker_threads';
import * as path from 'path';

export interface ImageResizeJobData {
  inputPath: string;
  outputPath: string;
  width: number;
  height: number;
}

@Processor('image-processing')
export class ImageProcessor extends WorkerHost {
  async process(job: Job<ImageResizeJobData>): Promise<void> {
    // Offload CPU-intensive work to a worker thread — does not block the event loop
    return new Promise((resolve, reject) => {
      const worker = new Worker(
        path.join(__dirname, 'image-resize.worker.js'),
        { workerData: job.data },
      );
      worker.on('message', resolve);
      worker.on('error', reject);
      worker.on('exit', (code) => {
        if (code !== 0) {
          reject(new Error(`Worker stopped with exit code ${code}`));
        }
      });
    });
  }
}

// image-resize.worker.ts — runs in a separate thread
import { workerData, parentPort } from 'worker_threads';
import sharp from 'sharp'; // Example: image processing library

async function resize() {
  const { inputPath, outputPath, width, height } = workerData;

  await sharp(inputPath)
    .resize(width, height)
    .toFile(outputPath);

  parentPort?.postMessage({ success: true, outputPath });
}

resize().catch((err) => {
  // Report the failure and exit non-zero so the parent's 'exit' handler rejects
  console.error(err);
  process.exit(1);
});

For most I/O-bound work (database queries, HTTP calls, file reads) you do not need worker threads — Node.js’s async I/O handles these efficiently without blocking. Worker threads are only needed for synchronous CPU computation.
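
A quick illustration of that boundary, with a hashing loop standing in for "synchronous CPU computation" (the function names are illustrative):

// blocking-vs-io.ts — which operations actually block the event loop?
import { createHash } from 'crypto';
import { readFile } from 'fs/promises';

// Non-blocking: the file read happens in libuv; while we await, the event loop
// keeps serving other queue jobs, timers, and HTTP requests.
export async function ioBound(path: string): Promise<number> {
  const contents = await readFile(path);
  return contents.length;
}

// Blocking: this loop is pure synchronous JavaScript. For its entire duration
// nothing else runs in the process — no other BullMQ jobs, no HTTP requests.
export function cpuBound(iterations: number): string {
  let digest = 'seed';
  for (let i = 0; i < iterations; i++) {
    digest = createHash('sha256').update(digest).digest('hex');
  }
  return digest;
}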


Key Differences

| Concept | .NET (Hangfire / BackgroundService) | NestJS (BullMQ / @nestjs/schedule) |
|---|---|---|
| Queue backend | Redis, SQL Server, or Azure Service Bus | Redis (BullMQ requires Redis) |
| Fire-and-forget job | BackgroundJob.Enqueue(...) | queue.add('jobName', data) |
| Delayed job | BackgroundJob.Schedule(..., delay) | queue.add('name', data, { delay: ms }) |
| Recurring job | RecurringJob.AddOrUpdate(...) | @Cron('0 8 * * *') on a method |
| Retry configuration | [AutomaticRetry(Attempts = 5)] | { attempts: 5, backoff: {...} } in job options |
| Worker / processor class | Implement Execute(PerformContext) | Extend WorkerHost, implement process(job) |
| Job data | Method parameters, serialized | job.data object, typed via generics |
| Monitoring dashboard | Hangfire Dashboard | Bull Board at /admin/queues |
| Concurrency | Thread pool — free to use CPU | Event loop — CPU work needs worker_threads |
| CPU-intensive work | Fine in a BackgroundService thread | Must use worker_threads or a separate process |
| Cron scheduling | RecurringJob.AddOrUpdate(..., Cron.Daily) | @Cron(CronExpression.EVERY_DAY_AT_8AM) |
| In-process timer | System.Threading.PeriodicTimer | @Interval(30000) |
| One-shot on startup | Override StartAsync() | @Timeout(5000) |
| Separate worker process | .NET Worker Service project | Separate NestFactory.createApplicationContext() |
| Job continuation | BackgroundJob.ContinueJobWith(id, ...) | BullMQ Flows (FlowProducer) |
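
The last row deserves a sketch, since continuations never appear in the examples above. In BullMQ, a flow is a parent job with child jobs; the parent runs only after every child completes, so the ContinueJobWith relationship is expressed with the continuation as the parent. A minimal sketch, assuming the email queue registered earlier and a hypothetical orders queue:

// order-flow.ts — enqueue an order job whose completion triggers a confirmation email
import { FlowProducer } from 'bullmq';

const flowProducer = new FlowProducer({
  connection: { host: process.env.REDIS_HOST ?? 'localhost' },
});

export async function enqueueOrderWithConfirmation(orderId: string): Promise<void> {
  await flowProducer.add({
    // Parent job — the "continuation"; runs after all children complete
    name: 'send-confirmation',
    queueName: 'email',
    data: { orderId },
    children: [
      {
        // Child job — the equivalent of the original BackgroundJob.Enqueue(...)
        name: 'process-order',
        queueName: 'orders',
        data: { orderId },
      },
    ],
  });
}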

Gotchas for .NET Engineers

1. CPU-intensive jobs block the entire Node.js process

This is the most consequential difference from .NET. In a Hangfire worker, you can perform CPU-heavy work in the job’s Execute method without affecting other workers or the web server — the thread pool handles parallelism. In a BullMQ processor running in the main Node.js process, CPU work blocks the event loop.

The symptoms: all other queued jobs stop processing, HTTP requests time out, health checks fail. The job eventually completes, but everything waits.

The solutions, in order of preference:

  1. Use an external service or library that does the CPU work asynchronously at the OS level (for example, sharp for images uses native bindings that keep the heavy work off the event loop)
  2. Use worker_threads for synchronous CPU computation (as shown above)
  3. Run the processor in a completely separate process (createApplicationContext)

Rule of thumb: if the operation takes more than 10ms of synchronous JavaScript execution (not I/O wait), move it to a worker thread.
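
To check whether a given job is crossing that line in practice, Node's built-in perf_hooks module can report event loop delay while the worker runs. A minimal sketch (the 50 ms threshold and 10-second reporting window are arbitrary choices for illustration):

// event-loop-delay.ts — detect whether synchronous work is stalling the event loop
import { monitorEventLoopDelay } from 'perf_hooks';

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

// Report the worst stall seen in each 10-second window; histogram values are nanoseconds
setInterval(() => {
  const maxMs = histogram.max / 1e6;
  if (maxMs > 50) {
    console.warn(`Event loop stalled for up to ${maxMs.toFixed(1)} ms — consider worker_threads`);
  }
  histogram.reset();
}, 10_000);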

2. @nestjs/schedule cron jobs are not persisted and do not survive restarts

Hangfire stores recurring job schedules in the database. If your server restarts at 07:58 and a job was scheduled for 08:00, Hangfire will run it when the server comes back. @nestjs/schedule cron jobs exist only in memory — any run that falls while the process is down is simply skipped, and nothing catches it up when the process comes back.

For reliable recurring jobs where missed runs matter, enqueue them via BullMQ with a repeatable job instead:

// Persistent repeating job — survives process restarts
await this.reportQueue.add(
  'daily-report',
  {},
  {
    repeat: {
      pattern: '0 8 * * *', // Cron expression
      tz: 'America/New_York',
    },
    jobId: 'daily-report-unique', // Prevents duplicate registrations on restart
  },
);

BullMQ stores repeatable job schedules in Redis, so a restart does not lose the schedule.

3. Multiple instances will all run @Cron() jobs concurrently

In a .NET deployment with multiple instances, Hangfire’s database-based locking ensures only one instance runs each recurring job. @nestjs/schedule has no such coordination — if you run three instances of your NestJS app, all three will fire the @Cron('0 8 * * *') handler at 08:00. You get three runs instead of one.

The solutions:

  • Use BullMQ repeatable jobs (Redis-coordinated, runs once across all instances)
  • Add a distributed lock around the cron handler:

@Cron(CronExpression.EVERY_DAY_AT_8AM)
async generateDailyReport(): Promise<void> {
  // Use ioredis SET NX EX as a distributed lock
  const acquired = await this.redis.set(
    'lock:daily-report',
    '1',
    'EX', 300,  // Lock expires after 5 minutes
    'NX',       // Only set if not exists
  );

  if (!acquired) {
    return; // Another instance got the lock
  }

  try {
    await this.reportService.generateDaily();
  } finally {
    await this.redis.del('lock:daily-report');
  }
}

4. BullMQ requires Redis — there is no SQL Server or in-memory backend

Hangfire supports SQL Server, PostgreSQL, and Redis backends. BullMQ only supports Redis. If your infrastructure does not include Redis, either add it (it is cheap and widely supported, including on Render, Railway, and AWS ElastiCache) or use @nestjs/schedule for scheduling and accept its limitations.

There is no in-memory BullMQ option. For integration tests, use a real Redis instance (via Docker or a test container) or mock the queue entirely.
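
For the "mock the queue entirely" route, @nestjs/bullmq exposes getQueueToken(), so the injected queue can be replaced in a testing module. A minimal Jest-style sketch against the UserService from earlier (the assertion shape is illustrative):

// user.service.spec.ts — override the injected 'email' queue with a stub
import { Test } from '@nestjs/testing';
import { getQueueToken } from '@nestjs/bullmq';
import { UserService } from './user.service';

describe('UserService', () => {
  it('enqueues a welcome email on registration', async () => {
    const queueMock = { add: jest.fn() };

    const moduleRef = await Test.createTestingModule({
      providers: [
        UserService,
        { provide: getQueueToken('email'), useValue: queueMock },
      ],
    }).compile();

    const service = moduleRef.get(UserService);
    await service.registerUser({ email: 'a@b.com', firstName: 'Alice' });

    expect(queueMock.add).toHaveBeenCalledWith(
      'welcome',
      expect.objectContaining({ email: 'a@b.com' }),
      expect.any(Object),
    );
  });
});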

5. Job data must be serializable — class instances lose their methods

Hangfire serializes job arguments to JSON when enqueuing. BullMQ does the same. In Hangfire, this is enforced at the call site because you pass method arguments. In BullMQ, you pass a plain object — but if you accidentally pass a class instance (with methods and getters), only the serializable data properties survive. Methods and computed properties are gone when the processor receives the job.

// Wrong — the class instance is serialized to JSON, methods are lost
class UserRegistration {
  constructor(public email: string, public firstName: string) {}
  getDisplayName() { return this.firstName; }  // This will be gone
}

await this.queue.add('welcome', new UserRegistration('a@b.com', 'Alice'));
// In the processor: job.data.getDisplayName is undefined

// Correct — use plain objects
await this.queue.add('welcome', {
  email: 'a@b.com',
  firstName: 'Alice',
} satisfies WelcomeEmailJobData);

Always use plain object literals for job data. Define the shape with a TypeScript interface or type, and use satisfies to catch mismatches at the call site.


Hands-On Exercise

Build an order fulfillment system that processes orders asynchronously.

Requirements:

  1. Create an orders BullMQ queue and an OrderProcessor that handles three job types:

    • validate-payment: checks payment status, throws if payment is declined (triggers BullMQ retry)
    • reserve-inventory: decrements stock, marks order items as reserved
    • send-confirmation: sends confirmation email via EmailService
  2. Create an OrderService that, when placeOrder() is called:

    • Saves the order with status: 'pending'
    • Enqueues validate-payment with a 2-retry policy and 5-second exponential backoff
    • Chains reserve-inventory (enqueue only if validate-payment completes, using BullMQ Flows or a completion handler)
  3. Create a CleanupScheduler with a @Cron('0 2 * * *') job that deletes orders older than 90 days with status: 'cancelled'.

  4. Add a @Interval(60_000) method that checks for orders stuck in pending status for more than 10 minutes and logs an alert.

  5. Mount Bull Board at /admin/queues and protect it with a basic auth middleware that reads credentials from environment variables.

Stretch goal: Identify the validate-payment step as potentially CPU-bound (imagine it involves RSA signature verification). Rewrite the handler to offload it to a worker_thread. Measure the event loop impact before and after using perf_hooks.monitorEventLoopDelay().


Quick Reference

| Task | .NET | NestJS |
|---|---|---|
| Register queue | services.AddHangfire(...) + AddHangfireServer() | BullModule.registerQueue({ name: 'q' }) |
| Enqueue job | BackgroundJob.Enqueue(...) | queue.add('name', data) |
| Delayed job | BackgroundJob.Schedule(..., delay) | queue.add('name', data, { delay: ms }) |
| Recurring via cron | RecurringJob.AddOrUpdate(...) | @Cron('0 8 * * *') on a method |
| Persistent recurring | Hangfire recurring (SQL/Redis backed) | queue.add('name', {}, { repeat: { pattern: '...' } }) |
| Interval timer | PeriodicTimer or Task.Delay loop | @Interval(30000) |
| One-shot on start | Override StartAsync() | @Timeout(5000) |
| Define processor | Implement Execute(PerformContext) | @Processor('q') class P extends WorkerHost |
| Process jobs | Execute() method | async process(job: Job<T>): Promise<void> |
| Inject queue | Constructor injection | @InjectQueue('q') private queue: Queue |
| Job retry | [AutomaticRetry(Attempts = 5)] | { attempts: 5, backoff: { type: 'exponential' } } |
| Job event hooks | IServerFilter | @OnWorkerEvent('completed') |
| CPU-intensive work | Thread pool (automatic) | worker_threads (manual) |
| Monitoring UI | Hangfire Dashboard | Bull Board (@bull-board/api) |
| Multi-instance cron | Hangfire DB locking (automatic) | Redis distributed lock (manual) |
| Separate worker | .NET Worker Service | createApplicationContext(WorkerModule) |
| Test queue in CI | UseInMemoryStorage() | Real Redis via Docker / Testcontainers |

Common BullMQ job options:

await queue.add('job-name', data, {
  attempts: 3,
  backoff: {
    type: 'exponential', // or 'fixed'
    delay: 2000,         // Base delay in ms
  },
  delay: 5000,           // Wait before first attempt (ms)
  priority: 1,           // Lower number = higher priority
  removeOnComplete: { count: 100 },  // Keep last N completed jobs
  removeOnFail: { count: 50 },
  jobId: 'unique-id',    // Deduplication key — prevents duplicate jobs
});

Common @nestjs/schedule expressions:

@Cron('* * * * * *')               // Every second
@Cron('0 * * * * *')               // Every minute
@Cron('0 0 * * * *')               // Every hour
@Cron('0 0 8 * * *')               // Daily at 08:00
@Cron('0 0 8 * * 1')               // Every Monday at 08:00
@Cron(CronExpression.EVERY_HOUR)   // Every hour (enum)
@Cron(CronExpression.EVERY_DAY_AT_8AM) // Daily at 08:00 (enum)

Further Reading