Emadideen Ghannam
BullMQ, RabbitMQ, or Postgres LISTEN/NOTIFY: picking a queue in 2026
Three queue-shaped problems, three different defaults. I run all three in production; here is the decision tree.
I run all three in production right now. One side project uses BullMQ for reminder jobs. Another side project uses RabbitMQ for a scrapers-to-processors-to-platform pipeline. An old enterprise stocktaking system I built years ago used Postgres LISTEN/NOTIFY for low-volume real-time events.
Every time someone asks me which queue to use, I have to remember they are asking about three different problems pretending to be one.
The decision tree
- “I need durable jobs with retries, schedules, rate limiting, and a dashboard.” -> BullMQ on Redis.
- “I need fan-out, topic exchanges, multi-language consumers, dead-letter queues, and ordering per partition.” -> RabbitMQ.
- “I need real-time pings between two Postgres-coupled services and don’t want to add infra.” -> Postgres LISTEN/NOTIFY.
That’s the whole tree. The rest is gotchas.
BullMQ for jobs
If the work item has a known shape, owns its own retry policy, and you want a UI to inspect failures, BullMQ on Redis is hard to beat. NestJS integration is a decorator and a base class. Bull Board gives you a working dashboard with almost no code.
```typescript
import { Processor, WorkerHost } from '@nestjs/bullmq';
import { Job } from 'bullmq';

@Processor('reminders')
export class ReminderProcessor extends WorkerHost {
  // BullMQ (unlike legacy Bull) has one process() per queue; dispatch on job.name
  async process(job: Job<{ userId: string }>) {
    if (job.name === 'send-daily') {
      const { userId } = job.data;
      await this.notify(userId); // app-specific delivery
    }
  }
}
```
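The producer side carries the retry and schedule policy. Here is a sketch of the job options for the same reminder queue — `attempts`, `backoff`, and `repeat` are real BullMQ JobsOptions keys, but the values are illustrative:

```typescript
// Illustrative BullMQ job options for the reminder example above.
const reminderJobOptions = {
  attempts: 5,                                             // retry up to 5 times
  backoff: { type: 'exponential' as const, delay: 1_000 }, // 1s, 2s, 4s, ...
  repeat: { pattern: '0 9 * * *' },                        // cron: daily at 09:00
};

// Producer call (needs a live Redis, so commented out here):
// await queue.add('send-daily', { userId: 'u123' }, reminderJobOptions);
```

The retry policy travels with the job, not the worker — which is exactly the “work item owns its own retry policy” property described above.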
The gotcha: BullMQ assumes monotonic clocks across workers. On Redis Cluster, if a node’s clock drifts more than a few seconds, delayed jobs fire early or late. Run your Redis on a node with NTP, not a free-tier instance you forgot about.
The other gotcha: BullMQ’s “stalled job” detection assumes workers heartbeat every 30 seconds. If your worker does a 60-second blocking call, the job is moved to “stalled” and replayed. Either keep your jobs short or tune the lock duration.
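If you go the tuning route, the relevant knobs are Worker options. A sketch with illustrative values — `lockDuration` and `stalledInterval` are real BullMQ Worker options, both defaulting to 30 seconds:

```typescript
// Raise the stall threshold so a long blocking handler isn't replayed.
// Values are illustrative; pick them from your actual handler times.
const workerOptions = {
  lockDuration: 120_000,   // a job may hold its lock for 2 minutes before it counts as stalled
  stalledInterval: 60_000, // how often the stalled-job checker runs
};

// Worker construction (needs a live Redis, so commented out here):
// new Worker('reminders', handler, { connection, ...workerOptions });
```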
RabbitMQ for pipelines
A data platform I’m building has scrapers, processors, and a platform API. Scrapers fail constantly (Cloudflare, rate limits, schema drift). Processors are CPU-bound (Python). The platform is HTTP-bound (NestJS). They have to be decoupled or one bad scraper takes down everything.
RabbitMQ shines when:
- The same message has multiple consumers (fan-out).
- Topics matter - “scraped.business.region” routed to region-specific enrichers.
- You need ordering per partition (consistent hashing exchange).
- You want a real dead-letter queue with replay.
- Consumers are written in different languages.
That pipeline runs Python processors (NLP, geocoding) and TypeScript publishers. RabbitMQ does not care.
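To make the topic bullet concrete: topic routing is dot-separated pattern matching, where “*” matches exactly one word and “#” matches zero or more. RabbitMQ does this inside the exchange, not in your code — the toy re-implementation below just illustrates the rules:

```typescript
// Toy re-implementation of RabbitMQ topic-exchange matching, for illustration only.
function topicMatches(pattern: string, routingKey: string): boolean {
  const p = pattern.split('.');
  const k = routingKey.split('.');
  const match = (pi: number, ki: number): boolean => {
    if (pi === p.length) return ki === k.length;
    if (p[pi] === '#') {
      // '#' may consume zero or more words
      for (let j = ki; j <= k.length; j++) {
        if (match(pi + 1, j)) return true;
      }
      return false;
    }
    if (ki === k.length) return false;
    return (p[pi] === '*' || p[pi] === k[ki]) && match(pi + 1, ki + 1);
  };
  return match(0, 0);
}
```

So a binding like “scraped.business.*” catches every region under scraped.business, while “scraped.#” catches the whole scraped subtree — which is how region-specific enrichers subscribe without knowing about each other.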
The gotcha: prefetch and ack subtlety. If you set prefetch=1, you serialise. If you set prefetch=100 and your handler crashes, you redeliver 100 messages. The right value depends on the average handler time and you have to measure.
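One way to turn “measure it” into a starting number is Little’s law: in-flight messages ≈ throughput × handler latency, clamped by the redelivery blast radius you can stomach on a crash. The helper below is an illustration of that arithmetic, not an amqplib API:

```typescript
// Starting-point prefetch from Little's law. Names are illustrative.
function suggestPrefetch(
  avgHandlerMs: number,
  targetMsgsPerSec: number,
  maxRedeliverOnCrash: number,
): number {
  // Enough in-flight messages to keep the consumer busy...
  const inFlight = Math.ceil((avgHandlerMs / 1000) * targetMsgsPerSec);
  // ...but never more than you are willing to see redelivered after a crash.
  return Math.max(1, Math.min(inFlight, maxRedeliverOnCrash));
}

// 200 ms handlers at 50 msg/s want ~10 in flight:
// channel.prefetch(suggestPrefetch(200, 50, 50));
```

Treat the result as a first guess to refine under load, not a constant to enshrine.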
The other gotcha: a poorly-tuned RabbitMQ will eat memory until the broker stops accepting publishes. Set vm_memory_high_watermark and disk_free_limit consciously, not as defaults.
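In the new-style rabbitmq.conf those two limits look like this — the keys are real, but the values are illustrative and should come from your host’s actual RAM and disk:

```ini
# rabbitmq.conf - set limits deliberately instead of shipping defaults
# (keys are real; values below are illustrative, not recommendations)
vm_memory_high_watermark.relative = 0.6
disk_free_limit.absolute = 5GB
```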
LISTEN/NOTIFY for cheap real-time
If you already have Postgres in the stack, two services are coupled to the same DB, and you need a low-volume real-time signal between them, LISTEN/NOTIFY is the cheapest possible option. No extra infra, no Redis, no Rabbit.
The old stocktaking system used it: an RFID reader writes a row, a Postgres trigger fires NOTIFY rfid_scan, '<json>', a LISTENer in the warehouse-floor app picks it up and updates the dashboard.
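The trigger side can be sketched like this — table and column names are assumptions, and the payload carries just the row id so it stays tiny:

```sql
-- Sketch of the NOTIFY trigger; rfid_scans and NEW.id are assumed names.
CREATE OR REPLACE FUNCTION notify_rfid_scan() RETURNS trigger AS $$
BEGIN
  -- Send only the id; the listener fetches the full row itself.
  PERFORM pg_notify('rfid_scan', json_build_object('id', NEW.id)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER rfid_scan_notify
AFTER INSERT ON rfid_scans
FOR EACH ROW EXECUTE FUNCTION notify_rfid_scan();
```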
```typescript
import { Client } from 'pg';

const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();
await client.query('LISTEN rfid_scan');

client.on('notification', (msg) => {
  const payload = JSON.parse(msg.payload!);
  applyScan(payload);
});
```
A few lines of consumer code. No broker. No extra deployment. Postgres handles the fan-out across listening connections.
The gotcha that bit me twice: NOTIFY payloads are limited to 8000 bytes. The driver does not warn you - it just truncates or fails silently depending on the client. Send IDs, not full objects. Have the listener fetch the full row by ID.
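If you publish notifications from application code rather than a trigger, it’s worth guarding the size yourself. A minimal sketch — the roughly 8000-byte cap is Postgres’s, but the helper name is made up:

```typescript
const NOTIFY_PAYLOAD_LIMIT = 8000; // Postgres's NOTIFY payload cap, in bytes

// Hypothetical helper: refuse oversized payloads up front instead of
// discovering truncation in production. Prefer sending just an id anyway.
function toNotifyPayload(value: unknown): string {
  const s = JSON.stringify(value);
  const bytes = Buffer.byteLength(s, 'utf8');
  if (bytes >= NOTIFY_PAYLOAD_LIMIT) {
    throw new Error(`NOTIFY payload is ${bytes} bytes; cap is ${NOTIFY_PAYLOAD_LIMIT}`);
  }
  return s;
}

// Publisher call (needs a live connection, so commented out here):
// await client.query(`SELECT pg_notify('rfid_scan', $1)`, [toNotifyPayload({ id: scanId })]);
```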
The other gotcha: LISTEN is a session-bound subscription. If your connection drops, you lose every notification fired during the gap. For at-most-once notifications - “the dashboard should refresh now” - that’s fine. For at-least-once delivery, use a real queue.
When the answer is “more than one”
The data platform I mentioned uses RabbitMQ for the scraper-to-processor pipeline AND BullMQ for daily refresh schedules AND LISTEN/NOTIFY for an admin-UI live feed. Each picked for the shape of its problem. They are not redundant. They are three different jobs.
Take
There is no “best queue.” There are three different problems pretending to be one. Match the queue to the problem, not the resume.