I want to insert a large number of records into an Azure storage queue.
I don't want to do it one by one; I would like a batch process so I can tell whether something went wrong, because I need to run a rollback process if it fails.
For example, run a batch of 50 and, if the queue receives all 50 records, get a success; if something goes wrong, get that information instead.
I know I can insert records into a table in batches with this command:
cloudTable.ExecuteBatchAsync(tableBatchOperation);
I also saw a way on the internet to do batch processing for queues, but I think that post is about performance rather than about whether the batch succeeded or failed.
Any idea? Any magic library?
AFAIK, it's not possible to send messages in a batch to an Azure storage queue.
Azure Service Bus, on the other hand, supports this functionality. You might want to look into it if batching is important to you.
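For illustration, here is a minimal sketch of a batched send using the Microsoft.Azure.ServiceBus package. The connection string and queue name are placeholders, and note that the whole batch has to fit within the tier's maximum message size.

// A sketch of sending messages as one batch; SendAsync(IList<Message>)
// submits the batch in a single call and throws if it cannot be sent,
// which gives the success/failure signal asked about above.
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class BatchSender
{
    static async Task SendBatchAsync(IEnumerable<string> records)
    {
        var client = new QueueClient("<connection-string>", "<queue-name>");
        var batch = new List<Message>();
        foreach (var record in records)
            batch.Add(new Message(Encoding.UTF8.GetBytes(record)));
        try
        {
            await client.SendAsync(batch);
        }
        finally
        {
            await client.CloseAsync();
        }
    }
}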
I'm using the new Microsoft library for Service Bus (Microsoft.Azure.ServiceBus) for .NET Core and I'm having trouble finding a solution for my problem. This is what I'm trying to accomplish:
My application processes a number of different events, and sometimes, because a dependency has gone down, I have to store a failed event for later processing, so I created a queue for that. I can send to it without any problems, but I'm having a difficult time receiving events from this queue, because I want some way to look at the queue every 5 minutes to see whether there is a failed event to process. Is there a way to "schedule" the retrieval of these messages from the queue? The guides I'm reading schedule the sending process, not the receiving one.
Hand-rolling a background thread for this application would be a tremendous and hellish task, so I would like to know if there is a more practical way to accomplish this. Any suggestions are greatly appreciated!
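For reference, this is roughly the shape of the hand-rolled polling loop I'd like to avoid (a sketch using MessageReceiver from Microsoft.Azure.ServiceBus; the queue name is a placeholder):

// Every 5 minutes, drain whatever failed events are waiting, then sleep.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class FailedEventPoller
{
    static async Task PollAsync(string connectionString)
    {
        var receiver = new MessageReceiver(connectionString, "failed-events");
        while (true)
        {
            Message message;
            while ((message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(5))) != null)
            {
                // ... reprocess the failed event here ...
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
            }
            await Task.Delay(TimeSpan.FromMinutes(5));
        }
    }
}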
Thank you.
I'm looking for some best practices on how to implement the following pattern using (micro) services and a service bus. In this particular case Service Fabric services and an Azure service bus instance, but from a pattern point of view that might not even be that important.
Suppose a scenario in which we get a work package with a number of files in it. For processing, we want each individual file to be processed in parallel so that we can easily scale this operation onto a number of services. Once all processing completes, we can continue our business process.
So it's a fan-out, followed by a fan-in to gather all results and continue. For example's sake, let's say that we have a ZIP file, we unzip it and have each file processed, and once all are done we can continue.
The fan-out bit is easy. Unzip the file, for n files post n messages onto a service bus queue and have a number of services handle those in parallel. But now how do we know that these services have all completed their work?
A number of options I'm considering:
1. Next to sending a service bus message for each file, we also store the files in a table of sorts, along with the name of the originating ZIP file. Once a worker is done processing, it removes that file from the table again and checks whether it was the last one. If it was, we can post a message to indicate that the entire ZIP has now been processed.
2. Similar to 1., but instead the worker replies that it's done and the ZIP processing service then checks whether there is any work left. A little cleaner, as the responsibility for that table now clearly lies with the ZIP processing service (see the sketch after this list).
3. Have the ZIP processing service actively wait for all the reply messages in separate threads, but just typing this already makes my head hurt a bit.
4. Introduce a specific orchestrator service which takes the n messages and takes care of the fan-out / fan-in pattern. This would still require solution 2 as well, but it's now located in a separate service, so we don't have any of this logic (+ storage) in the ZIP processing service itself.
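To make option 2 concrete, here is a minimal sketch of the bookkeeping, assuming the Microsoft.WindowsAzure.Storage table API; the layout (one row per outstanding file, partitioned by ZIP name) and all names are made up for illustration.

// Deletes the finished file's row, then checks whether any rows remain for
// this ZIP. Note the check-then-act is racy without extra coordination
// (e.g. ETags on a counter entity) if two workers finish at the same time.
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

class ZipProgressTracker
{
    readonly CloudTable table; // one row per file still being processed

    public ZipProgressTracker(CloudTable table) => this.table = table;

    // Called when a worker reports a file as processed.
    // Returns true when that was the last file of the ZIP.
    public async Task<bool> MarkFileDoneAsync(string zipName, string fileName)
    {
        var entity = new TableEntity(zipName, fileName) { ETag = "*" };
        await table.ExecuteAsync(TableOperation.Delete(entity));

        var query = new TableQuery<TableEntity>()
            .Where(TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, zipName))
            .Take(1);
        var segment = await table.ExecuteQuerySegmentedAsync(query, null);
        return !segment.Results.Any();
    }
}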
I looked into whether Service Bus might already have a feature of some sort to support this pattern, but could not find anything suitable. Durable Functions seems to support a scenario like this, but we're not using Functions within this project and I'd rather not start doing so just to implement this one pattern.
We're surely not the first ones to implement such a thing, so I was really hoping to find some real-world advice as to what works and what should be avoided at all cost.
I have a Service Bus-triggered function that, when receiving a message from the queue, does a simple DB call and then sends out emails/SMS. Can I put more than 1000 messages on my Service Bus queue to trigger the function simultaneously without the run time being affected?
My concern is that I queue up 1000+ messages to trigger my function all at the same time, say at 5:00 PM, to send out the emails/SMS. If they end up running late because there are so many running threads, the users receiving the emails/SMS won't get them until an hour after the designated time!
Is this a concern and if so is there a remedy?
FYI - I know I can make the function run asynchronously; would that make any difference in this scenario?
1000 messages is not a big number. If your email/SMS service can handle them fast, the whole batch will be gone relatively quickly. A few things to know, though:
Functions won't scale to 1000 parallel executions in this case. They will start with one instance doing ~16 parallel calls at a time, observe how fast the processing goes, then maybe add a second instance, wait again, and so on.
The exact scaling behavior is not publicly described and can change over time. Thus, YMMV, and you need to test against your specific scenario.
Yes, make the functions async whenever you can. I don't expect a huge boost in processing speed just because of that, but it certainly won't hurt.
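As a minimal sketch, an async version might look like the following (assuming the standard Service Bus trigger binding from Microsoft.Azure.WebJobs; the queue name, connection setting, and helper methods are placeholders):

// Awaiting the I/O keeps the worker thread free while the DB call and the
// email/SMS request are in flight. LookupRecipientAsync and
// SendEmailOrSmsAsync are hypothetical stand-ins for the real work.
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SendNotifications
{
    [FunctionName("SendNotifications")]
    public static async Task Run(
        [ServiceBusTrigger("email-jobs", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Processing message: {message}", message);
        var recipient = await LookupRecipientAsync(message);
        await SendEmailOrSmsAsync(recipient);
    }

    static Task<string> LookupRecipientAsync(string message) => Task.FromResult(message);
    static Task SendEmailOrSmsAsync(string recipient) => Task.CompletedTask;
}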
Bottom line: your scenario doesn't sound like a problem for Functions, but if you need very short latency, you'll have to run a test before relying on it.
I'm assuming you are talking about an Azure Service Bus binding to an Azure Function. There should be no issue with more than 1000 Azure Function executions firing at the same time. Functions is a serverless runtime and can scale out considerably if you are running under the Consumption model. If you are running the functions in an App Service plan, you may be limited by that plan.
In your scenario you are more likely to overwhelm the downstream dependencies (the database and the SMS sending system) before you overwhelm the Azure Functions infrastructure.
The best thing to do is some load testing, while monitoring the exceptions coming out of the connections to the database and SMS systems.
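If the downstream systems do buckle, one knob worth knowing about is the Service Bus trigger's concurrency setting in host.json. The shape below is the Functions v2 schema; verify it against your runtime version:

{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 16
      }
    }
  }
}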
I've got a series of services that generate events that are written to an Azure Event Hub. This hub is connected to a Stream Analytics job that takes the event information and writes it to Azure Table Storage and a Data Lake Store for later analysis by different teams and tools.
One of my services is reporting all events correctly, but the other isn't. After hooking up a listener to the hub I can see its events are being sent without a problem, but they aren't being processed or sent to the sinks by the job.
In the audit logs I see periodic transformation errors for one of the columns that's written to the storage, but looking at the data there's no problem with its format, and I can't seem to find a way to look at the troubled events that are causing these failures.
The only error I see in the Management Services is:
We are experiencing issues writing output for output TSEventStore right now. We will try again soon.
It sounds like there may be two issues:
1) The writing to the TableStorage TSEventStore is failing.
2) There are some data conversion errors.
I would suggest trying to troubleshoot them one at a time. For the first one, are there any events being written to the TSEventStore at all? Is there another message in the operations logs that may give more detail on why writing is failing?
For the second one, today we don't have a way to output events that have data conversion errors. The best approach is to output the data to only one sink (the Data Lake) and look at it there.
Thanks,
Kati
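For reference, the single-sink variant of the job might be as simple as the following Stream Analytics query, where the input and output names are placeholders for whatever is configured on the job:

SELECT *
INTO [DataLakeOutput]
FROM [EventHubInput]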
I'm working through a basic tutorial on the ServiceBus. A web role adds objects to a ServiceBus Queue, while a worker role reads those messages off the queue and marks them complete. This is all within the local environment (compute emulator).
It seems like it should be incredibly simple, but I'm seeing the following behavior:
The call QueueClient.Receive() is always timing out.
Some messages are just hanging out in the queue and are not being picked up by the worker.
What could be going on? How can I debug the state of these messages?
You can check the length of the queue (from the portal, or by looking at the MessageCount property).
Another possibility is that the messages are dead-lettered. You can read from the dead-letter subqueue to check; a sketch follows.
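A minimal sketch of reading the dead-letter subqueue with the same library the question uses (Microsoft.ServiceBus.Messaging); the connection string and queue name are placeholders:

// Dead-lettered messages carry properties such as DeadLetterReason that
// explain why the service (or your code) moved them there.
using System;
using Microsoft.ServiceBus.Messaging;

class DeadLetterInspector
{
    static void DrainDeadLetters(string connectionString, string queueName)
    {
        var deadLetterPath = QueueClient.FormatDeadLetterPath(queueName);
        var client = QueueClient.CreateFromConnectionString(connectionString, deadLetterPath);

        BrokeredMessage message;
        while ((message = client.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            Console.WriteLine("DeadLetterReason: {0}", message.Properties["DeadLetterReason"]);
            message.Complete();
        }
    }
}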
First of all, please make sure you indeed have some messages in the queue. I would suggest running the finished solution from this tutorial: http://msdn.microsoft.com/en-us/WAZPlatformTrainingCourse_ServiceBusMessaging. If that works fine, please compare your code with the sample code. If that doesn't work either, it is likely a configuration or network issue; in that case I would recommend checking whether you have properly configured the Service Bus account, and whether you're able to access the internet from your machine.
Best Regards,
Ming Xu.