RabbitMQ Queue Types Explained

RabbitMQ uses the Mnesia database (to be replaced with Khepri in RabbitMQ 4.0) to persist a cluster’s metadata. To persist messages, it relies on a variety of queue types.

This blog will look at each queue type in RabbitMQ, covering its use cases, core features, and storage implementation. To begin, let’s lift the lid on…

Queues in RabbitMQ

A queue in RabbitMQ is a named First-In-First-Out (FIFO) buffer that stores messages for consumer applications. These applications can create, use, and delete queues.

Queues in RabbitMQ can be durable, temporary, or auto-delete. Durable queues survive a broker restart and remain until they are explicitly deleted. Temporary (transient) queues exist only until the server shuts down. Auto-delete queues are removed when their last consumer goes away.
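To make these properties concrete, here is a minimal sketch using the Python pika client. The queue name, host, and message body are illustrative assumptions, not taken from the RabbitMQ documentation:

    # Minimal sketch with pika: declare a durable queue, publish a persistent
    # message, and consume it. Queue name and host are illustrative.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # durable=True: the queue survives a broker restart.
    # auto_delete=True would instead remove the queue when its last consumer goes away.
    channel.queue_declare(queue="orders", durable=True)

    channel.basic_publish(
        exchange="",                  # the default exchange routes on the queue name
        routing_key="orders",
        body=b"order #42",
        properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
    )

    def handle(ch, method, properties, body):
        print("received:", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="orders", on_message_callback=handle)
    channel.start_consuming()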

To cater for different use cases, RabbitMQ offers a variety of queue types: Classic Queues, Quorum Queues, Stream Queues, and the MQTT QoS 0 queue.

Moving on, let’s peel the layers of each queue type…

Classic Queues

Classic Queues are the original queue type in RabbitMQ. They use a non-replicated FIFO (First-In-First-Out) implementation.

There are two implementations (versions 1 and 2) of Classic Queues. The versions differ only in how data is stored on and retrieved from disk; all features are available in both implementations.

Classic Queues use-cases

By virtue of being non-replicated, Classic Queues are ideal for applications where high availability and data safety are less critical. They are the default queue type unless configured otherwise.

Classic Queues features

Some of the key features supported by Classic Queues include the following (a short declaration sketch follows the list):

  • Message and Queue TTL: Supports Time-To-Live settings for both messages and queues.
  • Queue Length Limits: Allows setting limits on the number of messages in the queue.
  • Message and Consumer Priority: Supports prioritisation of messages and consumers.
  • Dead Letter Exchanges: Supports dead-lettering, except for at-least-once dead-lettering.
  • QoS Prefetch: Supports both per-consumer and global QoS prefetch — global QoS prefetch will be removed in RabbitMQ 4.0
  • Queue Exclusivity: Supports exclusive queues, which can only be used by the connection that declared them and are deleted when that connection closes.
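Most of these features are switched on through optional arguments when the queue is declared. The sketch below shows a few of them with pika; the queue name, argument values, and the dead-letter exchange name are illustrative assumptions, not recommendations:

    # Sketch: a classic queue using several of the features listed above.
    # Names and limits are illustrative only.
    import pika

    channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()

    channel.queue_declare(
        queue="work",
        durable=True,
        arguments={
            "x-message-ttl": 60_000,          # messages expire after 60 seconds
            "x-expires": 1_800_000,           # the queue expires after 30 minutes of disuse
            "x-max-length": 10_000,           # keep at most 10,000 messages
            "x-max-priority": 10,             # enable message priorities 0-10
            "x-dead-letter-exchange": "dlx",  # route expired/rejected messages here
        },
    )

    # Per-consumer QoS prefetch: at most 50 unacknowledged deliveries per consumer.
    channel.basic_qos(prefetch_count=50)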

Classic Queues storage implementation

Classic Queues in RabbitMQ use an on-disk message store to persist actual message payloads. Alongside the message store, there's an on-disk index for each queue. This index tracks the location of messages in the message store and their position in the queue.

In version 2 of Classic Queues, RabbitMQ introduces a per-queue message store. Smaller messages are typically written to this store, while larger messages go to the shared message store.
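If you want to pin a queue to a specific implementation, recent RabbitMQ releases (3.10 and later) accept an x-queue-version queue argument; the version can also be set via policy or configuration. The snippet below is a sketch with an illustrative queue name:

    # Sketch: explicitly requesting classic queue version 2.
    import pika

    channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()

    channel.queue_declare(
        queue="invoices",
        durable=True,
        arguments={"x-queue-version": 2},
    )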

Limitations of Classic Queues

As stated earlier, by virtue of being non-replicated, Classic Queues fall short in scenarios requiring high availability and data safety. Classic Mirrored Queues were introduced to address these limitations by replicating data across multiple nodes. However, Classic Mirrored Queues come with their own set of challenges:

  • Performance is slower than it should be because messages are replicated using a very inefficient algorithm.
  • Furthermore, there are issues with Classic Mirrored Queues’ data synchronisation model:
    • The basic problem is that when a broker goes offline and comes back again, any data it had in mirrors gets discarded. Now that the mirror is back online but empty, the administrators have a decision to make: to synchronise the mirror or not. "Synchronise" means replicate the current messages from the leader to the mirror.
    • But here is the catch: synchronisation is a blocking operation, so the whole queue becomes unavailable while it runs.

You can read the section “what is wrong with mirrored queues anyway?” in our blog to learn more about the design flaws of Classic Mirrored Queues.

To overcome these drawbacks and provide a stronger solution for high availability and data safety, RabbitMQ introduced Quorum Queues.

Quorum Queues?

The RabbitMQ Quorum Queue is a modern queue type that implements a durable, replicated FIFO queue based on the Raft consensus algorithm.

Quorum Queues in RabbitMQ are designed for high availability and data safety. They replicate data across multiple nodes to ensure that messages are not lost, even if some nodes fail.

Quorum Queues use-cases

Quorum Queues are ideal for critical applications where data loss is unacceptable. They prioritise fault tolerance and data safety over minimal latency and advanced queueing features present in Classic Queues.

Quorum Queues features

Some of the key features supported by Quorum Queues include the following (a declaration sketch follows the list):

  • Data Safety and Replication: Quorum Queues ensure messages are replicated across multiple nodes, providing strong guarantees against data loss.
  • Durability: Quorum Queues are always durable, meaning they persist data to disk, unlike classic queues, which can be non-durable.
  • Dead Letter Exchanges: Supports dead-lettering, including at-least-once dead-lettering.
  • Poison Message Handling: Quorum Queues can handle poison messages, automatically managing repeated message redeliveries.
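As with classic queues, these capabilities are enabled through queue arguments, starting with x-queue-type set to quorum. The following sketch is illustrative; the queue name, delivery limit, and dead-letter exchange are assumptions:

    # Sketch: a quorum queue with poison-message handling and
    # at-least-once dead-lettering. Names and values are illustrative.
    import pika

    channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()

    channel.queue_declare(
        queue="payments",
        durable=True,                           # quorum queues are always durable
        arguments={
            "x-queue-type": "quorum",
            "x-delivery-limit": 5,              # stop redelivering a poison message after 5 attempts
            "x-dead-letter-exchange": "dlx",
            "x-dead-letter-strategy": "at-least-once",
            "x-overflow": "reject-publish",     # required when using at-least-once dead-lettering
        },
    )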

Quorum Queues storage implementation

In Quorum Queues, a shared Write-Ahead-Log (WAL), also called a journal file, is used on each node to persist all operations, including new messages. This log captures actions as they happen.

The operations stored in the WAL are kept in memory and simultaneously written to disk. When the current WAL file reaches a certain size (default 512 MiB), it's flushed to a segment file on disk, and the memory used by those log entries is released. These segment files are compacted over time, especially as consumers acknowledge deliveries.

Limitations of Quorum Queues

Classic and Quorum Queues are great! In fact, Quorum Queues largely resolve the replication issues around Classic Mirrored Queues. That notwithstanding, there are scenarios where both Classic and Quorum Queues struggle:

  • To deliver the same message to multiple consumers, a dedicated queue must be bound for each consumer. Clearly, this can become a scalability problem.
  • They erase messages once they have been read, making it impossible to re-read (replay) them or fetch a specific message in the queue.
  • They perform poorly when dealing with millions of messages because they are optimised to gravitate towards an empty state.

The RabbitMQ team introduced Stream Queues in RabbitMQ 3.9 to mitigate the above-listed challenges.

Stream Queues?

Stream queues in RabbitMQ are a persistent and replicated data structure that, like traditional queues, buffers messages from producers for consumers to read. However, Streams differ from queues in two ways:

  • How producers write messages to them
  • And how consumers read messages from them

Under the hood, Streams model an immutable, append-only log. In this context, this means messages written to a Stream can't be erased; they can only be read. To read messages from a Stream in RabbitMQ, one or more consumers subscribe to it and can read the same message as many times as they want.
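Reading a stream over AMQP makes this non-destructive, replayable behaviour visible: the consumer attaches at an offset, must set a prefetch count, and must acknowledge deliveries manually. Here is a sketch with pika; the stream name and offset are illustrative:

    # Sketch: replaying a stream from the beginning over AMQP with pika.
    # The stream name is illustrative.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.queue_declare(
        queue="events",
        durable=True,
        arguments={"x-queue-type": "stream"},
    )

    channel.basic_qos(prefetch_count=100)   # stream consumers require a prefetch count

    def handle(ch, method, properties, body):
        print("event:", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)   # manual acks are required

    channel.basic_consume(
        queue="events",
        on_message_callback=handle,
        # "first" replays from the start; "last", "next", a numeric offset,
        # or a timestamp are also accepted.
        arguments={"x-stream-offset": "first"},
    )
    channel.start_consuming()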

Stream Queues use-cases

The use cases where streams shine include:

  • Fan-out architectures: Where many consumers need to read the same message.
  • Replay & time-travel: Where consumers need to re-read the same message or start reading from any point in the stream.
  • Large Volumes of Messages: Streams are great for use cases where large volumes of messages need to be persisted.
  • High Throughput: RabbitMQ Streams process relatively higher volumes of messages per second.

Stream Queues features

Some of the key features supported by Stream Queues include the following (a declaration sketch follows the list):

  • Persistent and Replicated: Stream queues always save data to disk and replicate it across nodes, ensuring high data durability.
  • Non-Destructive Read: Consumers can read the same messages repeatedly without removing them from the queue.
  • High Throughput: Stream-specific features and a dedicated binary protocol plugin provide optimal performance.
  • Inherent Lazy Behaviour: Messages are stored directly on disk and do not consume memory until read.
  • No Non-Durable or Exclusive Queues: Stream queues are always durable and cannot be exclusive or temporary.
  • No TTL or Queue Length Limits: Instead of TTL and length limits, streams use retention policies to manage data lifecycle.
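Retention is configured with queue arguments when the stream is declared. The sketch below is illustrative; the stream name and limits are assumptions, not recommendations:

    # Sketch: a stream with retention limits. Values are illustrative.
    import pika

    channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()

    channel.queue_declare(
        queue="clickstream",
        durable=True,                                     # streams are always durable
        arguments={
            "x-queue-type": "stream",
            "x-max-length-bytes": 20_000_000_000,         # keep roughly 20 GB of data at most
            "x-max-age": "7D",                            # and at most 7 days of data
            "x-stream-max-segment-size-bytes": 100_000_000,  # 100 MB segment files
        },
    )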

Stream Queues storage implementation

Stream queues persist messages using fixed-size segment files on disk. Each message published to a stream queue goes into these segment files.

Once a segment file reaches its predefined size limit (default 500,000,000 bytes), it's closed in favour of a new one. This approach keeps file sizes manageable and optimises access and retrieval times. Each stream queue maintains an index, tracking the location of messages within these segment files.

You can learn more about Stream Queues in our three-part series.

MQTT QoS 0 Queue

RabbitMQ supports the MQTT protocol via the MQTT Plugin. By default, the MQTT plugin creates Classic Queues, and it can be configured to create Quorum Queues as well.

These traditional queues write data to disk and sometimes replicate it across nodes, potentially causing bottlenecks in message flow. In certain MQTT scenarios, the requirement is to just send messages to online subscribers without the overhead of persistence and/or replication.

This raises the question: how can we eliminate this bottleneck? Can we have no queues at all? This is exactly why the MQTT QoS 0 queue type was introduced in RabbitMQ 3.12.

Unlike classic queues, quorum queues, and streams, the MQTT QoS 0 queue type functions as a “pseudo” queue, paradoxically eliminating the underlying queue process. In other words, it does not operate as a separate Erlang process, nor does it store messages on disk. Instead, it uses the subscribing client's connection process mailbox.

This means that messages are sent directly to the MQTT connection process of the subscribing client, bypassing the traditional queue mechanism and ensuring immediate delivery to any “online” MQTT subscribers. This approach significantly reduces latency and resource usage.
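From the client's side nothing special needs to be configured: an MQTT subscriber that connects with a clean session and subscribes at QoS 0 can be served by this queue type. Below is a minimal sketch using the Python paho-mqtt client (1.x callback API); the topic, client id, and host are illustrative assumptions:

    # Sketch: an MQTT QoS 0 subscriber (paho-mqtt 1.x API). Topic and client id
    # are illustrative. Assumes RabbitMQ's MQTT plugin listens on localhost:1883.
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # QoS 0 subscription on a clean session: RabbitMQ can back this
        # subscription with the MQTT QoS 0 "pseudo" queue type.
        client.subscribe("sensors/temperature", qos=0)

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload)

    client = mqtt.Client(client_id="demo-subscriber", clean_session=True)
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.loop_forever()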

MQTT QoS 0 Queues use-cases

This queue type is ideal for the following MQTT use-cases:

  • Large Fan-Out Architectures: Efficiently broadcasts messages to millions of devices.
  • Low-Latency Messaging: Perfect for situations where minimal end-to-end latency is crucial.
  • Ephemeral Messaging: Suitable when message persistence is not needed, and occasional message loss is acceptable.

Wrap up

RabbitMQ's diverse queue types (Classic, Quorum, Stream, and MQTT QoS 0) are designed to address different use cases.

Classic queues provide a straightforward, non-replicated FIFO solution suitable for less critical applications. Quorum queues offer data safety and high availability through replication, making them ideal for critical tasks where data loss is not an option. Stream queues, with their append-only log and non-destructive read semantics, cater to high throughput scenarios and applications requiring message replay and persistence. Lastly, MQTT QoS 0 queues bypass the traditional queue mechanism entirely, delivering messages directly to online subscribers for fast, low-latency messaging in large-scale and ephemeral use cases.

Ready to start using RabbitMQ in your architecture? CloudAMQP is one of the world's largest RabbitMQ cloud hosting providers. In addition to RabbitMQ, we have also created our in-house message broker, LavinMQ, which we benchmarked at a throughput of around 1,000,000 messages/sec.

Easily create a free LavinMQ or free RabbitMQ instance on CloudAMQP. All are available after a quick and easy signup.
