RabbitMQ uses the Mnesia database (to be replaced with Khepri in RabbitMQ 4.0) to persist a cluster’s metadata. To persist messages, it uses a variety of queue types.
This blog will look at each queue type in RabbitMQ: their use-cases, core features, and storage implementations. To begin, let’s lift the lid on …
Queues in RabbitMQ
A queue in RabbitMQ is a named First-In-First-Out (FIFO) buffer that stores messages for consumer applications. These applications can create, use, and delete queues.
Queues in RabbitMQ can be durable, temporary, or auto-deleted. Durable queues remain until they are deleted. Temporary queues exist until the server shuts down. Auto-deleted queues are removed when they are no longer in use.
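These lifetimes are chosen when the queue is declared. Here is a minimal sketch using the Python pika client (queue names are hypothetical):

```python
import pika

# Connect to a local RabbitMQ node (default credentials assumed).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Durable queue: survives a broker restart and remains until explicitly deleted.
channel.queue_declare(queue="orders", durable=True)

# Temporary (transient) queue: does not survive a broker shutdown or restart.
channel.queue_declare(queue="metrics.transient", durable=False)

# Auto-deleted queue: removed once its last consumer cancels or disconnects.
channel.queue_declare(queue="live-updates", auto_delete=True)

connection.close()
```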
To cater for different use-cases, RabbitMQ offers a variety of queue types:
- Classic Queues
- Quorum Queues
- Stream Queues
- MQTT QoS 0 Queues
Moving on, let’s peel the layers of each queue type…
Classic Queues
Classic Queues are the original queue type in RabbitMQ. They use a non-replicated FIFO (First-In-First-Out) implementation.
There are two implementations (versions 1 and 2) of Classic Queues. The versions differ only in how data is stored on and retrieved from disk; all features are available in both implementations.
Classic Queues use-cases
By virtue of being non-replicated, Classic Queues are ideal for applications where high availability and data safety are less critical. They are the default queue type unless configured otherwise.
Classic Queues features
Some of the key features supported by Classic Queues include the following (a short declaration sketch follows the list):
- Message and Queue TTL: Supports Time-To-Live settings for both messages and queues.
- Queue Length Limits: Allows setting limits on the number of messages in the queue.
- Message and Consumer Priority: Supports prioritisation of messages and consumers.
- Dead Letter Exchanges: Supports dead-lettering, except for at-least-once dead-lettering.
- QoS Prefetch: Supports both per-consumer and global QoS prefetch (global QoS prefetch will be removed in RabbitMQ 4.0).
- Queue Exclusivity: Supports exclusive queues that are tied to a single connection and deleted when that connection closes.
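Most of these features are enabled per queue through optional arguments at declaration time. The following sketch (queue and exchange names are hypothetical) shows a Classic Queue declared with a message TTL, a queue TTL, a length limit, a dead-letter exchange, and message priorities, using pika:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exchange that will receive dead-lettered messages.
channel.exchange_declare(exchange="dlx", exchange_type="fanout")

channel.queue_declare(
    queue="work",
    durable=True,
    arguments={
        "x-message-ttl": 60000,           # messages expire after 60 seconds
        "x-expires": 1800000,             # the queue expires after 30 minutes of disuse
        "x-max-length": 10000,            # cap the queue at 10,000 messages
        "x-dead-letter-exchange": "dlx",  # expired/rejected messages are dead-lettered here
        "x-max-priority": 5,              # enable message priorities 0-5
    },
)

# Per-consumer QoS prefetch: at most 50 unacknowledged messages in flight.
channel.basic_qos(prefetch_count=50)
```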
Classic Queues storage implementation
Classic Queues in RabbitMQ use an on-disk message store to persist actual message payloads. Alongside the message store, there's an on-disk index for each queue. This index tracks the location of messages in the message store and their position in the queue.
In version 2 of Classic Queues, RabbitMQ introduces a per-queue message store. Smaller messages are typically written to this store, while larger messages go to the shared message store.
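In RabbitMQ 3.x, the Classic Queue version can be selected per queue with the x-queue-version optional argument, as in the short pika sketch below (the queue name is hypothetical):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Opt a single queue into Classic Queue v2 storage (per-queue message store and index).
channel.queue_declare(
    queue="work.v2",
    durable=True,
    arguments={"x-queue-version": 2},
)
```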
Limitations of Classic Queues
As stated earlier, by virtue of being non-replicated, Classic Queues fall short in scenarios requiring high availability and data safety. Classic Mirrored Queues were introduced to address these limitations by replicating data across multiple nodes. However, Classic Mirrored Queues come with their own set of challenges:
- Performance is slower than it should be because messages are replicated using a very inefficient algorithm.
Furthermore, there are issues with Classic Mirrored Queues’ data synchronisation model:
- The basic problem is that when a broker goes offline and comes back again, any data its mirrors held is discarded. Now that the mirror is back online but empty, the administrators have a decision to make: whether or not to synchronise the mirror. "Synchronise" means replicating the current messages from the leader to the mirror.
- But here is the catch: synchronisation is blocking, causing the whole queue to become unavailable while it runs.
You can read the section “What is wrong with mirrored queues anyway?” in our blog to learn more about the design flaws of Classic Mirrored Queues.
To overcome these drawbacks and provide a stronger solution for high availability and data safety, RabbitMQ introduced Quorum Queues.
Quorum Queues
The RabbitMQ Quorum Queue is a modern queue type that implements a durable, replicated FIFO queue based on the Raft consensus algorithm.
Quorum Queues in RabbitMQ are designed for high availability and data safety. They replicate data across multiple nodes to ensure that messages are not lost, even if some nodes fail.
Quorum Queues use-cases
Quorum Queues are ideal for critical applications where data loss is unacceptable. They prioritise fault tolerance and data safety over minimal latency and advanced queueing features present in Classic Queues.
Quorum Queues features
Some of the key features supported by Quorum Queues include the following (a declaration sketch follows the list):
- Data Safety and Replication: Quorum Queues ensure messages are replicated across multiple nodes, providing strong guarantees against data loss.
- Durability: Quorum Queues are always durable, meaning they persist data to disk, unlike classic queues, which can be non-durable.
- Dead Letter Exchanges: Support dead-lettering, including at-least-once dead-lettering.
- Poison Message Handling: Quorum Queues can handle poison messages, automatically managing repeated message redeliveries.
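Quorum Queues are selected with the x-queue-type argument. The pika sketch below (queue and exchange names are hypothetical) also sets a delivery limit for poison-message handling and opts into at-least-once dead-lettering, which requires the reject-publish overflow behaviour:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="orders.dlx", exchange_type="fanout")

channel.queue_declare(
    queue="orders.critical",
    durable=True,  # quorum queues must be declared durable
    arguments={
        "x-queue-type": "quorum",
        "x-delivery-limit": 5,                      # dead-letter after 5 redeliveries (poison messages)
        "x-dead-letter-exchange": "orders.dlx",
        "x-dead-letter-strategy": "at-least-once",  # at-least-once dead-lettering
        "x-overflow": "reject-publish",             # required for at-least-once dead-lettering
    },
)
```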
Quorum Queues storage implementation
In Quorum Queues, a shared Write-Ahead-Log (WAL), also called a journal file, is used on each node to persist all operations, including new messages. This log captures actions as they happen.
The operations stored in the WAL are kept in memory and simultaneously written to disk. When the current WAL file reaches a certain size (default 512 MiB), it's flushed to a segment file on disk, and the memory used by those log entries is released. These segment files are compacted over time, especially as consumers acknowledge deliveries.
Limitations of Quorum Queues
Classic and Quorum Queues are great! In fact, Quorum Queues largely resolve the replication problems of Classic Mirrored Queues. That notwithstanding, there are scenarios where both Classic and Quorum Queues struggle:
- To deliver the same message to multiple consumers, a dedicated queue must be bound for each consumer. Clearly, this can create a scalability problem.
- They erase messages once read, making it impossible to re-read (replay) them or fetch a specific message in the queue.
- They perform poorly when dealing with millions of messages because they are optimised to gravitate towards an empty state.
The RabbitMQ team introduced Stream Queues in RabbitMQ 3.9 to mitigate the above-listed challenges.
Stream Queues
A Stream in RabbitMQ is a persistent and replicated data structure that, like traditional queues, buffers messages from producers for consumers to read. However, Streams differ from queues in two ways:
- How producers write messages to them
- And how consumers read messages from them
Under the hood, a Stream models an immutable append-only log. In this context, that means messages written to a Stream can't be erased; they can only be read. To read messages from a Stream in RabbitMQ, one or more consumers subscribe to it and can read the same message as many times as they want.
Stream Queues use-cases
The use cases where streams shine include:
- Fan-out architectures: Where many consumers need to read the same message.
- Replay & time-travel: Where consumers need to reread the same message or start reading from any point in the stream.
- Large Volumes of Messages: Streams are great for use cases where large volumes of messages need to be persisted.
- High Throughput: RabbitMQ Streams can process significantly higher volumes of messages per second than traditional queue types.
Stream Queues features
Some of the key features supported by Stream Queues include the following (a declaration and consumption sketch follows the list):
- Persistent and Replicated: Stream queues always save data to disk and replicate it across nodes, ensuring high data durability.
- Non-Destructive Read: Consumers can read the same messages repeatedly without removing them from the queue.
- High Throughput: Stream-specific features and a dedicated binary protocol plugin provide optimal performance.
- Inherent Lazy Behaviour: Messages are stored directly on disk and do not consume memory until read.
- No Non-Durable or Exclusive Queues: Stream queues are always durable and cannot be exclusive or temporary.
- No TTL or Queue Length Limits: Instead of TTL and length limits, streams use retention policies to manage data lifecycle.
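Over AMQP 0.9.1, a Stream is declared with x-queue-type set to stream, and retention arguments take the place of TTLs and length limits. Consuming requires a QoS prefetch, manual acknowledgements, and an x-stream-offset argument to choose where to start reading. A minimal pika sketch (names and sizes are illustrative):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a stream whose retention is based on total size rather than TTL/length limits.
channel.queue_declare(
    queue="events",
    durable=True,
    arguments={
        "x-queue-type": "stream",
        "x-max-length-bytes": 20_000_000_000,           # retain roughly 20 GB of data
        "x-stream-max-segment-size-bytes": 100_000_000, # size of each on-disk segment
    },
)

def handle(ch, method, properties, body):
    print(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acks advance this consumer; nothing is deleted

# Streams require a prefetch limit and manual acks; start reading from the first message.
channel.basic_qos(prefetch_count=100)
channel.basic_consume(
    queue="events",
    on_message_callback=handle,
    arguments={"x-stream-offset": "first"},
)
channel.start_consuming()
```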
Stream Queues storage implementation
Stream queues persist messages using fixed-size segment files on disk. Each message published to a stream queue goes into these segment files.
Once a segment file reaches its predefined size limit (default 500,000,000 bytes), it's closed in favour of a new one. This approach keeps file sizes manageable and optimises access and retrieval times. Each stream queue maintains an index, tracking the location of messages within these segment files.
You can learn more about Stream Queues in our three-part series.
MQTT QoS 0 Queue
RabbitMQ supports the MQTT protocol via the MQTT plugin. By default, the MQTT plugin creates Classic Queues, and it can be configured to create Quorum Queues as well.
These traditional queues write data to disk and sometimes replicate it across nodes, potentially causing bottlenecks in message flow. In certain MQTT scenarios, the requirement is to just send messages to online subscribers without the overhead of persistence and/or replication.
This raises the question: can we eliminate this bottleneck and have no queues at all? This is exactly why the MQTT QoS 0 queue type was introduced in RabbitMQ 3.12.
Unlike classic queues, quorum queues, and streams, the MQTT QoS 0 queue type functions as a “pseudo” queue, paradoxically eliminating the underlying queue process. In other words, it does not operate as a separate Erlang process, nor does it store messages on disk. Instead, it uses the subscribing client's connection process mailbox.
This means that messages are sent directly to the MQTT connection process of the subscribing client, bypassing the traditional queue mechanism and ensuring immediate delivery to any “online” MQTT subscribers. This approach significantly reduces latency and resource usage.
MQTT QoS 0 Queues use-cases
This queue type is ideal for the following MQTT use-cases (a subscriber sketch follows the list):
- Large Fan-Out Architectures: Efficiently broadcasts messages to millions of devices.
- Low-Latency Messaging: Perfect for situations where minimal end-to-end latency is crucial.
- Ephemeral Messaging: Suitable when message persistence is not needed, and occasional message loss is acceptable.
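On the client side nothing special is required: the QoS 0 queue type is used automatically when an MQTT client with a clean session subscribes at QoS 0 (provided the corresponding feature flag is enabled on the broker). A sketch with the Python paho-mqtt client, assuming its 1.x API and a hypothetical topic:

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client API

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

# A clean-session connection subscribing at QoS 0 lets RabbitMQ route messages
# straight to this connection's process, with no on-disk queue behind it.
client = mqtt.Client(client_id="dashboard-1", clean_session=True)
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("sensors/temperature", qos=0)
client.loop_forever()
```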
Wrap up
RabbitMQ's diverse queue types (Classic, Quorum, Stream, and MQTT QoS 0) are designed to address different use cases.
Classic queues provide a straightforward, non-replicated FIFO solution suitable for less critical applications. Quorum queues offer data safety and high availability through replication, making them ideal for critical tasks where data loss is not an option. Stream queues, with their append-only log and non-destructive read semantics, cater to high throughput scenarios and applications requiring message replay and persistence. Lastly, MQTT QoS 0 queues bypass the traditional queue mechanism entirely, delivering messages directly to online subscribers for fast, low-latency messaging in large-scale and ephemeral use cases.
Ready to start using RabbitMQ in your architecture? CloudAMQP is one of the world's largest RabbitMQ cloud hosting providers. In addition to RabbitMQ, we have also created our in-house message broker, LavinMQ - we benchmarked its throughput at around 1,000,000 messages/sec.
Easily create a free LavinMQ or free RabbitMQ instance on CloudAMQP. All are available after a quick and easy signup.
Common questions regarding Queue Types
How do Stream Queues differ from Classic and Quorum Queues?
Stream Queues - Optimized for high-throughput, long-term message storage and replay, making them ideal for real-time applications. They efficiently handle large-scale data streams and allow for message replay, enabling you to reprocess events whenever needed.
Classic Queues - Provide straightforward, non-replicated message storage suited for general or less critical tasks. They are lightweight and efficient, but offer less resilience compared to Quorum Queues and Stream Queues.
Quorum Queues - High-reliability, fault-tolerant queues designed for critical, durable message processing. They ensure strong message consistency through replication, making them ideal for high-availability systems that require data preservation even in the event of node failures.
Can different queue types be used together on the same RabbitMQ node?
Yes. RabbitMQ allows you to mix and match queue types based on the specific needs of your applications. Each queue type serves different use cases, and using them together provides flexibility in balancing performance, durability, and throughput.
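For example, one application can declare all three durable queue types side by side on the same node and vhost; only the x-queue-type argument differs. A short pika sketch with hypothetical queue names:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(queue="jobs.classic", durable=True)  # classic is the default type
channel.queue_declare(queue="orders.quorum", durable=True,
                      arguments={"x-queue-type": "quorum"})
channel.queue_declare(queue="events.stream", durable=True,
                      arguments={"x-queue-type": "stream"})
```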
What happens if I choose the wrong queue type for my needs when setting up RabbitMQ?
Choosing the wrong queue type can cause performance issues, reliability concerns, or inefficient resource use. For instance, using Quorum Queues for high-throughput, low-priority tasks adds unnecessary overhead, while Classic Queues for critical tasks may lack the fault tolerance needed, risking data loss during failures.
When should I use a Stream Queue?
You should use a Stream Queue when your application requires:
- High-throughput message processing: Ideal for scenarios with millions of messages per second.
- Long-term message storage: If you need to retain messages for extended periods and retrieve them later.
- Message replay: When you need the ability to reprocess or replay past messages, such as in event sourcing or log analysis.
- Real-time data streaming: Best for use cases like real-time analytics, monitoring, or event-driven architectures where large volumes of data need to be processed continuously.
Stream Queues are well-suited for data-intensive, real-time applications that demand both speed and flexibility. If you’d like to learn more about Streams, visit our blog series RabbitMQ Streams and Replay Features.
Is it possible to change queue type after declaring a queue?
Once a queue is declared with a specific type, its type cannot be changed. This is because the queue type determines its underlying storage mechanism and behavior, which are fundamental to its operation.
If you need to use a different queue type, you'll need to:
- Declare a new queue with the desired type.
- Migrate or republish messages from the old queue to the new one, if necessary.
- Delete the old queue, if it's no longer needed.
Changing the queue type would involve different underlying implementations and configurations, which is why it's not supported directly.
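As a quick illustration, re-declaring an existing queue with a different x-queue-type is refused by the broker with a channel-level PRECONDITION_FAILED error; the new queue simply has to be declared under a different name. A hedged pika sketch with hypothetical names:

```python
import pika
import pika.exceptions

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(queue="tasks", durable=True)  # declared as a classic queue

try:
    # Attempting to re-declare the same name as a quorum queue is rejected by the broker.
    channel.queue_declare(queue="tasks", durable=True,
                          arguments={"x-queue-type": "quorum"})
except pika.exceptions.ChannelClosedByBroker as err:
    print("PRECONDITION_FAILED:", err)  # the channel is closed; declare a new queue name instead
```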