Windows Azure Service Bus Topics Billing

I need details on how Windows Azure Service Bus Topics billing works. For example:
Am I going to be charged for what my applications publish, or for what my applications receive?
For example, let's say that I have one publisher and 5 topics, with 1,000 messages per second arriving on each topic, where every message is 1 KB in size.
On the other side, I have one subscriber that is subscribed to only one topic and has also applied a filter, so it receives only 10 messages per second instead of 1,000.
On the publisher side we have: 5 topics × 1,000 msg/s × 60 × 60 × 24 × 30 = 12,960,000,000 messages × 1 KB across the five topics in one month.
On the subscriber side we have: 1 × 10 msg/s × 60 × 60 × 24 × 30 = 25,920,000 messages × 1 KB.
So, am I going to be charged for A or B?
A: 12,960,000,000 messages × 1 KB
B: 25,920,000 messages × 1 KB

I found this article very helpful in understanding the pricing structure: http://msdn.microsoft.com/en-us/library/windowsazure/hh667438.aspx
In essence, putting a message onto a queue counts as one message. Reading a message from a queue (or trying to read one) also counts as one message.
In the case of topics and subscribers, putting the message on the topic is one message and each subscriber reading a message is also one message.
In your example you would be charged for 12,960,000,000 + 25,920,000 = 12,985,920,000 messages, or roughly $13k, which isn't too bad considering you are pushing about 13 TB through a transactional queueing system.
Do note that you should use the built-in long-polling support to read the queue, as you will be charged for each attempt to read an empty queue.
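For illustration, here is what a long-polling receive looks like with the current Azure.Messaging.ServiceBus SDK. This is a minimal sketch with placeholder names, not the SDK the linked article describes:
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class LongPollingReceive
{
    static async Task Main()
    {
        await using var client = new ServiceBusClient("<connection-string>");
        ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");

        // maxWaitTime keeps the receive call pending on the server (long polling),
        // so an empty queue does not turn into a tight loop of billable attempts.
        ServiceBusReceivedMessage message =
            await receiver.ReceiveMessageAsync(maxWaitTime: TimeSpan.FromSeconds(60));

        if (message != null)
        {
            Console.WriteLine(message.Body.ToString());
            await receiver.CompleteMessageAsync(message);
        }
    }
}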
Also bear in mind that there is a nominal charge for obtaining an authentication token, so make sure your code does not obtain a new token for each put or get. See the cost table at the bottom of this article: http://msdn.microsoft.com/en-us/library/hh767287%28VS.103%29.aspx

You will be charged for A + B...
Multiple deliveries of the same message (for example, message fan-out to multiple listeners, or message retrieval after abandon, deferral, or dead-lettering) will be counted as independent messages. For example, in the case of a topic with three subscriptions, a single 64 KB message sent and subsequently received will generate four billable messages (one “in” plus three “out”, assuming all messages are delivered to all subscriptions).
Refer to MSDN for more info: http://msdn.microsoft.com/en-us/library/hh667438.aspx#BKMK_SBv2FAQ2_6
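To make the counting rule concrete, here is the arithmetic from the answer above worked through in a few lines of C# (the $1-per-million rate is inferred from the ~$13k figure quoted earlier; check current pricing before relying on it):
using System;

class BillingEstimate
{
    static void Main()
    {
        long secondsPerMonth = 60L * 60 * 24 * 30;      // 2,592,000
        long published = 5 * 1000 * secondsPerMonth;    // "in": 12,960,000,000
        long received = 10 * secondsPerMonth;           // "out": 25,920,000
        long billable = published + received;           // 12,985,920,000
        double dollars = billable / 1_000_000.0;        // ≈ 12,986 at $1 per million

        Console.WriteLine($"{billable:N0} billable messages ≈ ${dollars:N0}");
    }
}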

Related

Azure Function with Event Hub trigger receives a weird number of events

I have an Event Hub and an Azure Function connected to it. With small amounts of data all works well, but when I tested it with 10,000 events, I got very peculiar results.
For test purposes I send the numbers 0 to 9999 into the Event Hub and log the data in Application Insights and in Service Bus. In the first test I can see in Azure that the hub received exactly 10,000 events, but Service Bus and Application Insights got all messages between 0 and 4500, and only every second message after 4500 (so it lost about 30%). In the second test, I got all messages from 0 to 9999, but every second message between 3200 and 3500 was duplicated. I would like to get every message exactly once; what did I do wrong?
public async Task Run(
    [EventHubTrigger("%EventHubName%", Connection = "AzureEventHubConnectionString")] EventData[] events,
    ILogger log)
{
    // Tag this invocation so the "Started"/"Completed" entries can be correlated.
    int id = _random.Next(1, 100000);
    _context.Log.TraceInfo("Started. Count: " + events.Length + ". " + id); // AI log

    foreach (var message in events)
    {
        // Forward each event to Service Bus.
        var mess = new Message(message.EventBody.ToArray());
        await queueClient.SendAsync(mess);
    }

    _context.Log.TraceInfo("Completed. " + id); // AI log
}
By using EventData[] events, you are reading events from the hub in batch mode; that's why you see X events being processed at a time and then the next batch processed in the following seconds.
Instead of EventData[], simply use EventData.
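For clarity, the single-event form of the same trigger would look roughly like this (a sketch reusing the question's own binding and helper names):
public async Task Run(
    [EventHubTrigger("%EventHubName%", Connection = "AzureEventHubConnectionString")] EventData eventData,
    ILogger log)
{
    // One function invocation per event instead of one per batch.
    var mess = new Message(eventData.EventBody.ToArray());
    await queueClient.SendAsync(mess);
}
Bear in mind that per-event dispatch lowers throughput, so at higher volumes batch mode plus explicit duplicate handling (see the answer below) is often the better trade-off.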
When you send events to the hub, make sure they are all sent with the same partition key if you want to try batch processing (see the sketch below); otherwise they can be split across several partitions depending on TUs (throughput units), PUs (processing units), and CUs (capacity units).
Also note the per-TU egress limit: up to 2 MB per second or 4,096 events per second. Refer to the Event Hubs quotas documentation for the throughput limits of the Basic, Standard, and Premium tiers.
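As a sketch of the partition-key advice above, using the current Azure.Messaging.EventHubs SDK (all names and connection strings are placeholders):
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class PartitionKeySend
{
    static async Task Main()
    {
        await using var producer = new EventHubProducerClient("<connection-string>", "<event-hub-name>");

        // A fixed partition key routes every event in the batch to the same partition.
        var options = new CreateBatchOptions { PartitionKey = "device-42" };
        using EventDataBatch batch = await producer.CreateBatchAsync(options);

        for (int i = 0; i < 100; i++)
        {
            // TryAdd returns false once the batch is full.
            if (!batch.TryAdd(new EventData(BinaryData.FromString($"event {i}"))))
                break;
        }

        await producer.SendAsync(batch);
    }
}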
There are a couple of things likely happening, though I can only speculate with the limited context that we have. Knowing more about the testing methodology, tier of your Event Hubs namespace, and the number of partitions in your Event Hub would help.
The first thing to be aware of is that the timing between when an event is published and when it is available in a partition to be read is non-deterministic. When a publish operation completes, the Event Hubs broker has acknowledged receipt of the events and taken responsibility for ensuring they are persisted to multiple replicas and made available in a specific partition. However, it is not a guarantee that the event can immediately be read.
Depending on how you sent the events, the broker may also need to route events from a gateway by performing a round-robin or applying a hash algorithm. If you're looking to optimize the time from publish to availability, taking ownership of partition distribution and publishing directly to a partition can help, as can ensuring that you're publishing with the right degree of concurrency for your host environment and scenario.
With respect to duplication, it's important to be aware that Event Hubs offers an "at least once" guarantee; your consuming application should expect some duplicates and needs to be able to handle them in the way that is appropriate for your application scenario.
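A common way to cope with at-least-once delivery is to make processing idempotent, for example by remembering which events have already been handled. A minimal in-memory sketch (a real system would use a persistent store, or idempotent downstream writes such as upserts):
using System.Collections.Generic;

class Deduplicator
{
    // Partition id + sequence number uniquely identifies an event within an Event Hub.
    private readonly HashSet<(string PartitionId, long SequenceNumber)> _seen = new();

    public bool IsDuplicate(string partitionId, long sequenceNumber)
    {
        // HashSet.Add returns false when the key is already present.
        return !_seen.Add((partitionId, sequenceNumber));
    }
}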
Azure Functions uses a set of event processors in its infrastructure to read events. The processors collaborate with one another to share work and distribute the responsibility for partitions between them. Because collaboration takes place using storage as an intermediary to synchronize, there is an overlap of partition ownership when instances are scaled up or scaled down, during which time the potential for duplication is increased.
Functions decides to scale based on the number of events that it sees waiting in partitions to be read. In the case of your test, if your publishing rate increases rapidly and Functions sees the event backlog grow to the point that it feels the need to scale out by multiple instances, you'll see more duplication than you otherwise would for a period of 10-30 seconds until partition ownership normalizes. To mitigate this, gradually increasing your publishing speed over a 1-2 minute period can help smooth out the scaling and reduce (but not eliminate) duplication.

What happens to events in an event hub after Stream Analytics does its work and routes them to Service Bus?

I have the following scenario:
The event hub (EH1) is configured with a retention policy of 7 days.
Producers publish events to EH1.
The events from EH1 are routed by Stream Analytics (SA) (after performing certain calculations over 1-hour time windows) to Service Bus, which gets both the raw events (as messages) and the summarized calculations.
Let's say that over the 24-hour period of day 1, producers publish 1 million events to EH1.
SA kicks in and routes the raw events as well as the summarized calculations (over 1-hour periods) to Service Bus.
Assume that after day 1, no events are pushed to EH1 for the next 15 days.
Questions:
How long will the 1 million raw events (from day 1) stay in EH1?
Will those 1 million raw events (from day 1) still be there from day 2 (after the 1st hour) through day 7 (because the retention policy is 7 days)? Or will they be gone after day 1, once SA is done processing them? If neither, what happens instead?
What metrics should I look at in EH1 to prove whatever the answer to (1) and (2) is?
First of all, you should take a look at consumer groups.
In short, when a consumer (any app or code that receives events from the Event Hub) reads events, it must read them via a consumer group (let's name it cg_1 here). The next time you read events from cg_1, the events whose position you have already checkpointed will not be read again.
But if you switch to another consumer group (say you newly create one named cg_2), you can read all the data again, even though it has already been read via cg_1.
So for your questions:
#1:
Since you have configured a retention policy of 7 days, the events (raw data) will be kept in the Event Hub for 7 days. If the events have been received via one consumer group, you cannot receive them again via that consumer group, but you can use another consumer group to receive the data again.
#2:
Similar to question 1, the raw events will be stored in the Event Hub for the retention days you have configured.
#3:
There is no such metric, but you can easily write client code: create a new consumer group, then read the data to check whether it's there.
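As a sketch of that check, using the current Azure.Messaging.EventHubs SDK (consumer group and connection details are placeholders), reading through a freshly created consumer group replays everything still inside the retention window:
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs.Consumer;

class ReplayCheck
{
    static async Task Main()
    {
        // "cg_2" is the newly created consumer group from the explanation above.
        await using var consumer = new EventHubConsumerClient(
            "cg_2", "<connection-string>", "<event-hub-name>");

        // This overload starts from the earliest retained event in each partition,
        // so day-1 events reappear as long as they are within the 7-day retention.
        await foreach (PartitionEvent e in consumer.ReadEventsAsync())
        {
            Console.WriteLine(e.Data.EventBody.ToString());
        }
    }
}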

Does IoTHub delay messages by a batching interval in a custom endpoint to Azure Storage?

I am sending some messages in a pipeline using Azure IoT Edge. There is a custom endpoint (say, GenericEndpoint) that I have set up, which sends/puts the messages into Azure Blob storage, and I am using a route to push the device messages to GenericEndpoint.
The batch frequency of GenericEndpoint is set to 60 seconds, so 1 batch creates 1 file, containing some messages, in the specified container.
Let's say there are N messages in a single blob batch file (say, blobX) in that container. If I take the average, over all messages i in blobX, of the difference between blobX's 'Creation Time' and each message's IoTHub.EnqueuedTime(i), and call it AVG, I get:
AVG = (1/N) × Σ over i of (CreationTime(blobX) − EnqueuedTime(i))
I think this essentially gives me the average time that those N messages spent in IoT Hub before being written to blob storage. I observe a similar delay if I look only at p and q, respectively the first and last message written in blobX.
Since the batching interval was set to 60 seconds, I would expect this average AVG to be approximately 30 seconds, because if the messages are written as soon as the window closes, the average for each batch file would be near 30 seconds.
But in my case AVG ≈ 90 seconds, which suggests the messages wait for at least approximately one extra batching interval (60 seconds in this case) before being considered for a particular batch.
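To spell out the expectation: if arrivals are roughly uniform within a 60-second window and the blob is written the moment the window closes, each message waits between 0 and 60 seconds, so the average is about 30 seconds. An observed AVG of roughly 90 seconds is therefore consistent with exactly one extra full interval: 30 + 60 = 90.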
Assumption: when a batch of messages is written to a blob file, they are all written at once.
My questions:
Is this delay of one batch interval (60 seconds) intentional? If yes, I assume it will change when the batching interval is changed to, say, 100 seconds.
If no, does it usually take about 60 seconds for IoT Hub to process a message and send it through a route to a custom endpoint? Or am I looking at this from a completely wrong angle?
I apologize beforehand if my question seems confusing.

Send million events to Azure event hub every 5 seconds

I have a simple requirement: 1 million devices need to send a simple heartbeat to the event hub every 5 seconds, which works out to 200,000 events per second. Since 1 throughput unit supports only 1,000 events/sec, do I really need to purchase 200 throughput units to implement a simple heartbeat mechanism?
I'm really wondering about the claim of Event Hubs supporting millions of events per second; how is that possible if a throughput unit only supports 1,000? I would need a HUGE number of throughput units, and that's going to burn every last dollar. Unless I'm really missing something.

Should the event hub have the same number of partitions as throughput units?

For Azure Event Hubs, 1 throughput unit equals 1 MB/sec of ingress, so it can take 1,000 messages of 1 KB. If I select 5 or more throughput units, would I be able to ingest 5,000 messages/second of 1 KB size with 4 partitions? What would egress be in that case? I am not sure about the limit on an Event Hub partition; I read that it is also 1 MB/sec. Does that mean that to use Event Hubs effectively I need to have the same number of partitions as throughput units?
Great question.
1 Throughput Unit (TU) means an ingress limit of 1 MB/sec or 1,000 msgs/sec, whichever is hit first. You pay for TUs and you can change TUs as per your load requirements. This is your knob to control the bill. And TUs are set on a given Event Hubs namespace!
When you buy 1 TU for an Event Hubs namespace and create a number of Event Hubs in it, the limit of 1 MB/sec or 1,000 msgs/sec applies cumulatively across them. The limit also applies to each partition individually (although sometimes you might get lucky in some regions where load is low).
Consider these principles while deciding on the number of partitions in an Event Hub for your service:
The intent of partitions is to offer high availability. If you are sending to Event Hubs and you want the sends to succeed NO MATTER WHAT, you should create multiple partitions and send using EventHubClient.Send (which doesn't confine the send to a particular partition).
The number of partitions determines how fat the event pipe is and how fast/parallel you can receive and process the events. If you have 10 partitions on your Event Hub, its capacity is effectively capped at 10 TUs. You can create 10 epoch receivers and consume and process events in parallel (a sketch of the modern equivalent follows this list). If you envision that the Event Hub you are creating now could quickly grow 10-fold, create that many partitions up front and keep the TUs matched to the current load. The analogy here is having multiple lanes on a freeway!
Another thing to note: a TU is configured at the namespace level, one Event Hubs namespace can have multiple Event Hubs in it, and each Event Hub can have a different number of partitions.
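As a sketch of the parallel-consumption point above: the modern equivalent of epoch receivers is EventProcessorClient, which spreads partition ownership across running instances (all names are placeholders; it needs a blob container for checkpoints):
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Processor;
using Azure.Storage.Blobs;

class ParallelConsume
{
    static async Task Main()
    {
        var checkpoints = new BlobContainerClient("<storage-connection-string>", "checkpoints");
        var processor = new EventProcessorClient(
            checkpoints,
            EventHubConsumerClient.DefaultConsumerGroupName,
            "<event-hub-connection-string>",
            "<event-hub-name>");

        // Each partition is owned by at most one processor instance at a time.
        processor.ProcessEventAsync += async args =>
        {
            Console.WriteLine($"Partition {args.Partition.PartitionId}: {args.Data.EventBody}");
            await args.UpdateCheckpointAsync();
        };
        processor.ProcessErrorAsync += args => Task.CompletedTask;

        await processor.StartProcessingAsync();
        await Task.Delay(TimeSpan.FromMinutes(1));
        await processor.StopProcessingAsync();
    }
}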
Answers:
If you select 5 or more TUs on the namespace and have only 1 Event Hub with 4 partitions, you will get a max of 4 MB/sec or 4K msgs/sec.
Max egress will be 2X the ingress (8 MB/sec or 8K msgs/sec). In other words, you could create 2 patterns of receivers (e.g. slow and fast) by creating 2 consumer groups. If you need more than 2X parallel receives, you will need to buy more TUs.
Yes, ideally you will have more partitions than TUs. First model your partition count as mentioned above. Start with 1 TU while you are developing your solution. Once done, when you are load testing or going live, increase the TUs in tune with your load. Remember, you can have multiple Event Hubs in a namespace, so having 20 TUs at the namespace level and 10 Event Hubs with 4 partitions each can deliver 20 MB/sec across the namespace.
More on EventHubs
One partition goes to one TU. Think of TUs as a processing engine: you can't take advantage of more TUs than you have partitions. If you have 4 partitions, you can't use more than 4 TUs.
It's typical to have more partitions than TUs, for the following reasons:
You can scale the number of TUs up if you have a lot of traffic, but you can't change the number of partitions.
You can't have more concurrent readers than you have partitions. If you want 5 concurrent readers, you need 5 partitions.
As for throughput, the limits are 1 MB ingress / 2 MB egress per TU. This covers the typical scenario where each event is sent both to cold storage (e.g. a database) and to Stream Analytics or an event processor for analysis, monitoring, etc.
