Azure Messaging Services#
This post is based on the official Azure documentation (Asynchronous messaging options, Compare Azure messaging services, Enterprise integration using message broker and events, Azure Well-Architected Framework) and summarizes the differences and use cases of the Azure messaging services: Service Bus, Event Grid, and Event Hubs. The official documentation is very good and comprehensive; this post is my personal quick reference and reminder.
Message Types#
Type | Sub Type | Messaging service | Usage | Example |
---|---|---|---|---|
command | message | Service Bus | - The message contains the data that triggered the message pipeline. - A command is a high-value message and must be delivered at least once. If a command is lost, the entire business transaction might fail. - The producer might expect the consumer to acknowledge the message and report the results of the operation. | Order processing and financial transactions (see the Service Bus sketch after this table) |
event | event streaming | Event Hubs | - Related events in a sequence, or a stream of events, over a period of time. - Available either as data streams or bundled event batches. - Can capture the streaming data into an Avro file for processing and analysis. - Telemetry - Distributed data streaming | Event streaming from IoT devices |
event | event distribution (discrete notification) | Event Grid | - Status change notifications. - To announce discrete facts. - The message informs the consumer that an action has taken place, without expecting the event to result in any particular action. - Event Grid isn't a data pipeline, and doesn't deliver the actual object that was updated. - The consumer only needs to know that something happened. - The event data has information about what happened but doesn't have the data that triggered the event, e.g. send emails upon CRUD operations. | - Azure Resource Manager raises events when it creates, modifies, or deletes resources. A subscriber of those events could be a Logic App that sends alert emails. - For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn't have the file itself (see the Event Grid sketch after this table). |
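As referenced in the command row, here is a minimal sketch (assuming the Python azure-servicebus SDK, a hypothetical `orders` queue, and a placeholder connection string) of a producer sending a command message that carries the data needed to process an order:

```python
# Minimal sketch: send a command message to a Service Bus queue.
# Connection string and queue name are placeholders.
import json

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<service-bus-connection-string>"  # assumption: SAS connection string
QUEUE_NAME = "orders"                               # hypothetical queue


def send_order_command(order_id: str) -> None:
    # The command carries the data that drives the business transaction,
    # so it must reach the consumer at least once.
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
            message = ServiceBusMessage(
                json.dumps({"orderId": order_id, "action": "process"}),
                content_type="application/json",
                message_id=order_id,  # a stable id lets duplicate detection drop resends
            )
            sender.send_messages(message)


if __name__ == "__main__":
    send_order_command("order-42")
```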
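And the Event Grid sketch referenced above: a minimal example (assuming the Python azure-eventgrid SDK, a custom topic with placeholder endpoint and key, and a made-up `Demo.Storage.FileCreated` event type) of publishing a discrete notification that says a file was created without carrying the file itself:

```python
# Minimal sketch: publish a discrete notification to an Event Grid custom topic.
# Endpoint, key and event type are placeholders/assumptions.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridEvent, EventGridPublisherClient

TOPIC_ENDPOINT = "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<topic-access-key>"


def notify_file_created(file_name: str) -> None:
    client = EventGridPublisherClient(TOPIC_ENDPOINT, AzureKeyCredential(TOPIC_KEY))
    event = EventGridEvent(
        subject=f"/files/{file_name}",
        event_type="Demo.Storage.FileCreated",  # hypothetical event type
        data={"fileName": file_name},            # metadata about what happened, not the file
        data_version="1.0",
    )
    client.send(event)


if __name__ == "__main__":
    notify_file_created("report.csv")
```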
Comparison#
The comparison table below is not finished yet.
Features | Service Bus | Event Hubs (real-time data streaming platform with native Apache Kafka support) | Event Grid |
---|---|---|---|
core components | - queue: FIFO queue, each message has at most one consumer; Competing Consumers pattern, Queue-based Load Leveling pattern; available in all tiers - topic/subscription: for one-to-many delivery (Publish-Subscribe pattern), no queue used; FIFO ordering is not enforced across subscriptions, but can be maintained inside a subscription by leveraging message sessions; not available in the Basic tier | event hub, partitions, consumer groups | topic, event subscription |
delivery policy | at-least-once delivery of a message | at-least-once delivery of an event | at-least-once delivery of an event |
ordering (FIFO) | y | y, per partition | n |
pull/push model | pull or push: - polling by SDK (polling is costly if we don't have many messages) - push by Azure Functions with a Service Bus trigger - proxied push with Event Grid (needs Service Bus premium tier) (see the receive sketch after this table) | pull - As events are received, Event Hubs appends them to the stream. A subscriber manages its own cursor and can move forward and back in the stream, select a time offset, and replay a sequence at its own pace. | push with event handlers: - webhooks (Azure Automation runbooks and Logic Apps are supported via webhooks) - Azure Functions - Event Hubs - Service Bus queues and topics - Relay hybrid connections - Storage queues |
reliable (no message loss on communication failure) | y | | |
resilient (a new consumer can read a message already read by a failed consumer, as long as the failed consumer didn't acknowledge it) | | | |
guaranteed delivery (a given message is read by only one consumer) | y (see also Azure Service Bus duplicate message detection) | | |
duplicate detection | y | ||
at-least-once delivery | y | | |
checkpoint | | y - a consumer records its position in the stream as a checkpoint (see the consumer sketch after this table) | |
dead-letter queue (DLQ) | y | | y (dead-lettering to a storage account) |
retry | y, max delivery count defaults to 10, max 2000 | | y, defaults to 30 attempts, max 30 attempts |
expiration | y | | y, event time-to-live defaults to 24 hours, max 24 hours |
filter | y (filters on a topic subscription) | | |
retention | almost unlimited; the functional maximum is the boundary of the C# TimeSpan type, slightly over 10,675,199 days | 7 days | |
partitioning | y (partitioned queues and topics) | y - For example, several IoT devices send device data to an event hub. The partition key is the device identifier. As events are ingested, Event Hubs moves them to separate partitions. Within each partition, all events are ordered by time (see the partition-key sketch after this table). | |
capture | | y - store the event stream to Azure Blob Storage or Data Lake Storage. Capture stores all events ingested by Event Hubs and is useful for batch processing. You can generate reports on the data by using a MapReduce function. Captured data can also serve as the source of truth. | |
supports Apache Kafka clients | | y (Event Hubs for Apache Kafka): a Kafka broker maps to an Event Hubs namespace, a Kafka topic maps to an event hub | |
throughput | lower than Event Hubs | can ingest millions of events per second (see event-hubs-quotas). Events are only appended to the stream and are ordered by time. Scale in Event Hubs is controlled by how many throughput units (TUs) or processing units you purchase. | 10,000,000 events per second per region. The first 100,000 operations per month are free. |
autoscale | y | y | |
disaster recovery | y | y | |
use cases | financial transactions and order processing | pull data from Event Hubs for transformation and statistical analysis; use Azure Stream Analytics and Apache Spark for complex processing such as aggregation over time windows or anomaly detection | - logs - telemetry - invoke an Azure Function when a blob is created or deleted - IoT MQTT |
size | - queue max size: 5 GB | | |
cost |
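The receive sketch referenced in the pull/push row: a minimal example (assuming the Python azure-servicebus SDK, the same hypothetical `orders` queue, and a placeholder connection string) of a consumer polling a Service Bus queue and settling each message explicitly.

```python
# Minimal sketch: pull messages from a Service Bus queue and settle them explicitly.
# Connection string and queue name are placeholders.
from azure.servicebus import ServiceBusClient

CONNECTION_STR = "<service-bus-connection-string>"
QUEUE_NAME = "orders"  # hypothetical queue


def drain_queue() -> None:
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        receiver = client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5)
        with receiver:
            for message in receiver:  # iteration stops after 5 s with no new messages
                try:
                    print("processing:", str(message))
                    receiver.complete_message(message)  # ack: removes it from the queue
                except Exception:
                    # return the message to the queue; after the max delivery count
                    # is exceeded it is moved to the dead-letter queue
                    receiver.abandon_message(message)


if __name__ == "__main__":
    drain_queue()
```

With the default peek-lock receive mode, an abandoned message (or one whose lock expires) becomes visible again for another consumer, and it is dead-lettered once the maximum delivery count is exceeded.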
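The partition-key sketch referenced in the partitioning row: a minimal example (assuming the Python azure-eventhub SDK, a hypothetical `device-telemetry` event hub, and a placeholder connection string) where the device identifier is used as the partition key so that one device's events stay ordered within one partition.

```python
# Minimal sketch: send device telemetry to Event Hubs using the device id as partition key.
# Connection string and event hub name are placeholders.
from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "<event-hubs-namespace-connection-string>"
EVENT_HUB_NAME = "device-telemetry"  # hypothetical event hub


def send_telemetry(device_id: str, readings: list) -> None:
    producer = EventHubProducerClient.from_connection_string(
        CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
    )
    with producer:
        # Every event in this batch shares the partition key, so Event Hubs appends
        # them to the same partition and their time order is preserved.
        batch = producer.create_batch(partition_key=device_id)
        for reading in readings:
            batch.add(EventData(reading))
        producer.send_batch(batch)


if __name__ == "__main__":
    send_telemetry("device-001", ['{"temp": 21.5}', '{"temp": 21.7}'])
```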
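And the consumer sketch referenced in the checkpoint row (same assumptions as above, plus the default `$Default` consumer group): the subscriber manages its own cursor, can start from the beginning of the retained stream, and, with a persistent checkpoint store, record how far it has read.

```python
# Minimal sketch: read the stream from the beginning and track the position.
# Without a persistent checkpoint store (e.g. BlobCheckpointStore), checkpoints
# are not saved across restarts; this sketch keeps them in memory only.
from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<event-hubs-namespace-connection-string>"
EVENT_HUB_NAME = "device-telemetry"  # hypothetical event hub


def on_event(partition_context, event):
    print(partition_context.partition_id, event.body_as_str())
    # With a checkpoint store configured, this records how far we have read,
    # so a replacement consumer could resume from here.
    partition_context.update_checkpoint(event)


def replay_stream() -> None:
    client = EventHubConsumerClient.from_connection_string(
        CONNECTION_STR, consumer_group="$Default", eventhub_name=EVENT_HUB_NAME
    )
    with client:
        try:
            # "-1" starts from the beginning of the retained stream; the consumer
            # controls its own cursor and can replay at its own pace.
            client.receive(on_event=on_event, starting_position="-1")
        except KeyboardInterrupt:
            pass


if __name__ == "__main__":
    replay_stream()
```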
Patterns#
- Messaging Bridge pattern
- Competing Consumers pattern: Apache Kafka doesn't have this feature.
- Priority Queue pattern
- Queue-based Load Leveling pattern
- Retry pattern
- Scheduler Agent Supervisor pattern
- Choreography pattern
- Claim-Check pattern (see the sketch below)
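To illustrate the last pattern, a minimal sketch (assuming the Python azure-storage-blob and azure-servicebus SDKs, placeholder connection strings, and a hypothetical `payloads` container and `orders` queue): the large payload is parked in Blob storage and only a small claim check travels through the queue.

```python
# Minimal sketch of the Claim-Check pattern: park the payload in Blob storage,
# send only a reference through Service Bus. Names and connection strings are placeholders.
import json
import uuid

from azure.servicebus import ServiceBusClient, ServiceBusMessage
from azure.storage.blob import BlobServiceClient

BLOB_CONNECTION_STR = "<storage-connection-string>"
BUS_CONNECTION_STR = "<service-bus-connection-string>"
CONTAINER = "payloads"  # hypothetical container
QUEUE_NAME = "orders"   # hypothetical queue


def send_with_claim_check(payload: bytes) -> None:
    # 1. Store the heavy payload in blob storage under a unique name.
    blob_name = f"claim-{uuid.uuid4()}"
    blob_service = BlobServiceClient.from_connection_string(BLOB_CONNECTION_STR)
    blob_service.get_blob_client(container=CONTAINER, blob=blob_name).upload_blob(payload)

    # 2. Send only the claim check (the blob reference) through the queue;
    #    the consumer downloads the blob when it processes the message.
    claim_check = json.dumps({"container": CONTAINER, "blob": blob_name})
    with ServiceBusClient.from_connection_string(BUS_CONNECTION_STR) as client:
        with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
            sender.send_messages(
                ServiceBusMessage(claim_check, content_type="application/json")
            )


if __name__ == "__main__":
    send_with_claim_check(b"\x00" * 1_000_000)  # e.g. a payload too large to send inline
```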