In our app (hosted in Azure) we produce invoices that have to be injected into on-premises accounting software. It is not possible to host an on-premises API that would be reachable from Azure to post the invoices to.
Is it possible to create an exe that runs on-premises and that gets triggered by Azure queue messages, the way WebJobs can? Once triggered, it would retrieve the invoice from a blob storage object.
Other suggestions are also welcome.
One important thing I want to mention is that even WebJobs poll the queue at a predetermined interval (I believe the default is 30 seconds). Azure Queues don't support a push-based triggering mechanism the way you're imagining.
What you want to do is entirely possible though. You could write a Windows Service that essentially wakes up at a predetermined interval, checks for messages in the queue, processes any messages it finds, and otherwise goes back to sleep.
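For illustration, the polling loop could look something like this (a minimal sketch, assuming the Azure.Storage.Queues and Azure.Storage.Blobs packages; the queue and container names and the message format are placeholders, not something prescribed by Azure):

using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Queues;

class InvoicePoller
{
    static async Task Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION");
        var queue = new QueueClient(connectionString, "invoice-queue");    // placeholder queue name
        var blobs = new BlobContainerClient(connectionString, "invoices"); // placeholder container name

        while (true)
        {
            // Pull up to 10 messages; each message body is assumed to hold the blob name.
            var messages = (await queue.ReceiveMessagesAsync(maxMessages: 10)).Value;

            foreach (var msg in messages)
            {
                using var stream = new MemoryStream();
                await blobs.GetBlobClient(msg.Body.ToString()).DownloadToAsync(stream);

                // TODO: hand the invoice stream to the on-premises accounting software here.

                // Delete the message only after it was processed successfully.
                await queue.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
            }

            if (messages.Length == 0)
                await Task.Delay(TimeSpan.FromSeconds(30)); // sleep while the queue is empty
        }
    }
}

Wrapped in a Windows Service (or a console app run by a service host), this gives you the WebJob-like behavior, just on-premises.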
Related
I'm in need of some second opinions and guidance on how to use Azure Functions in combination with Azure Service Bus in the scenario described below. Coding is not an issue; it's about selecting the most appropriate method. Sadly, I have not found any good example of this online, so now I'm reaching out for some help.
Scenario
I have an ecommerce customer that is sending a few thousand orders a day to an ERP system. Normal daily operations are not an issue, but we would like to make the solution more robust to handle surges such as "Black Friday". Currently the website can hold x amount of orders before its database is full and it is forced to close or to send orders downstream. The website currently sends orders directly into the ERP system, and it is this part I want to decouple with Azure Service Bus queues. With this decoupling we can keep pushing new orders onto the queue and consume them at our own pace in the ERP without flooding any system.
My thoughts about how to set this up
1. The website sends messages directly to the Service Bus queue. An Azure Function is bound to trigger on every new message in the queue and sends that message to the ERP system.
2. Same as above, except the website first sends a message to an Azure Function, which puts it into the queue.
3. The website sends messages to the queue as in point 1 or 2. Instead of binding a function to the queue, we set up a scheduled function. The function runs frequently and sends one message to the ERP system per run.
4. The website sends messages to the queue as in point 1 or 2. Here we do not push messages to the ERP system; instead, the ERP system is the one that reads the queue. I do not like this approach, but it is possible to do and easy for ERP users to administer.
Questions
If I go with point 1 or 2 above, should the function responsible for delivering the messages to the ERP system send one order or multiple orders per trigger?
If I go with point 1 or 2, isn't it still possible to flood the ERP system, since the functions will most likely trigger at the same moment the messages are put in?
If the ERP system is down and the queue grows, do I need a separate scheduled function to drain the queue until it is empty?
We do not have to discuss the dead-letter queue here; that is another topic.
How would you approach this or if you have done a similar solution what method did you use?
Thank you for your guidance, much appreciated!
We've learned a lot in the past couple of years working with Azure Functions and Service Bus to solve scenarios similar to the one you describe above. You're definitely on the right track in wanting to decouple in case of a surge. To give you some peace of mind with your choice of Azure Service Bus: we normally push hundreds of events a minute through our topics and subscriptions, and it holds up well.
Let me just share some of the lessons we learned:
- The number of concurrent incoming requests within the same second was one of our breaking points. A properly written website will easily accommodate multiple incoming requests, but we learned about "port exhaustion" related to the outbound web requests to our Azure Function. Review the scope and lifetime of your web client and the limits of your App Service plan / web server.
- If you choose a Consumption plan for your Azure Function, be aware that it sometimes takes a long time to cold-start. Whatever is hitting the function will have to implement retries (probably a good practice anyway).
- A Service Bus message has a size limit (which can be increased, but there's still a limit). We randomly hit it with one of our payloads that contains a bulk of information. Know the worst-case payload size you may encounter.
- In the event something goes wrong and there are tens of thousands of messages in the queue, there is no easy way to query what's in there. Make sure you're fine with that; otherwise consider doing fast writes into a database that can be queried.
- An Azure Function triggered by Service Bus can spawn multiple concurrent executions of your code (which is desired), up to a limit. Be aware of any limitations in the code that updates your ERP; you will have no control over when the Service Bus triggers fire.
- Be careful with the function's storage account: functions with the same name will have their trigger settings and locks stepping on each other (e.g. dev vs. prod environments).
- Connections to Azure Service Bus will sometimes fail; that's just the nature of services hosted in the cloud. It only happens occasionally and recovers after a few seconds.
Consider doing this:
Website -> Azure API Management Gateway -> Azure Function A -> Service Bus -> Azure Function B -> ERP
Azure API Management with Application Insights enabled is a nice extra layer that allows you to secure, monitor, and route requests to your Function A. In cases where you need to route incoming requests to some emergency bucket, it's a life saver.
Consider allowing Function A to accept an array of your items. Enable Application Insights and add telemetry code to give you a view of throughput in terms of orders.
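For illustration, Function A might look roughly like this (a sketch only, assuming the in-process functions model with the Service Bus output binding; the queue name, connection setting, and payload shape are placeholders):

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static class SubmitOrders
{
    [FunctionName("SubmitOrders")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [ServiceBus("orders", Connection = "ServiceBusConnection")] IAsyncCollector<string> queue,
        ILogger log)
    {
        // Accept a whole array of orders per request: fewer outbound calls
        // from the website (see the port-exhaustion lesson above).
        var body = await new StreamReader(req.Body).ReadToEndAsync();
        var orders = JsonConvert.DeserializeObject<List<object>>(body);

        foreach (var order in orders)
            await queue.AddAsync(JsonConvert.SerializeObject(order)); // one queue message per order

        log.LogInformation("Queued {Count} orders", orders.Count);
        return new OkResult();
    }
}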
Give Function B a configurable timer trigger and an app setting for the number of messages to process from the queue per run. This allows you to throttle the flow of data into your ERP. It may be debatable, since you won't be able to scale this function out to multiple instances, but I'm assuming the original concern was to control the pace. Also enable the same Application Insights, telemetry, logging, etc.
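A sketch of that Function B (assuming the Azure.Messaging.ServiceBus client; the schedule expression, batch-size setting, and queue name are placeholders to be replaced by your app configuration):

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DrainOrders
{
    [FunctionName("DrainOrders")]
    public static async Task Run(
        [TimerTrigger("%DrainSchedule%")] TimerInfo timer, // e.g. every minute
        ILogger log)
    {
        var batchSize = int.Parse(Environment.GetEnvironmentVariable("DrainBatchSize") ?? "10");

        await using var client = new ServiceBusClient(
            Environment.GetEnvironmentVariable("ServiceBusConnection"));
        var receiver = client.CreateReceiver("orders");

        // Take at most batchSize messages per run: this is the throttle on the ERP.
        var messages = await receiver.ReceiveMessagesAsync(batchSize, TimeSpan.FromSeconds(5));
        foreach (var msg in messages)
        {
            // TODO: push the order to the ERP here.
            await receiver.CompleteMessageAsync(msg); // complete only after the ERP accepted it
        }

        log.LogInformation("Forwarded {Count} orders to the ERP", messages.Count);
    }
}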
I'm hoping I don't draw too much criticism for this. We learned the hard way and eventually received some really good guidance from Azure architects and engineers.
I have looked through the documentation for WebJobs, Functions and Logic Apps in Azure, but I cannot find a way to schedule a one-time execution of a process through code. My users need to be able to schedule notifications to go out at a specific time in the future (usually within a few hours or a day of being scheduled). Everything I am reading about those services uses CRON expressions, which are not designed for one-time executions. I realize that I could schedule the job to run on an interval and check the database to see whether the rest of the job needs to run, but I would like to avoid running the jobs unnecessarily if possible. Any help is appreciated.
If it is relevant, I am using C#, ASP.NET MVC Core, App Services and a SQL database all hosted in Azure. My plan was to use Logic apps to check the database for a scheduled event and send notifications through Twilio, SendGrid, and iOS/Android push notifications.
One option is to create Azure Service Bus messages in your app with the ScheduledEnqueueTimeUtc property set. The message is created in the queue immediately but only becomes consumable at that time.
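A minimal sketch using the Microsoft.Azure.ServiceBus client, where the property is literally named ScheduledEnqueueTimeUtc (in the newer Azure.Messaging.ServiceBus SDK the equivalent is ScheduledEnqueueTime / ScheduleMessageAsync); the queue name is a placeholder:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class NotificationScheduler
{
    static async Task ScheduleAsync(string connectionString, string payload, DateTime sendAtUtc)
    {
        var queueClient = new QueueClient(connectionString, "notifications"); // placeholder queue
        var message = new Message(Encoding.UTF8.GetBytes(payload))
        {
            ScheduledEnqueueTimeUtc = sendAtUtc // invisible to consumers until this time
        };
        await queueClient.SendAsync(message);
        await queueClient.CloseAsync();
    }
}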
Then a Logic App could be listening to that Service Bus Queue and doing the further processing, e.g. SendGrid, Twilio, etc...
HTH
You could use an Azure Queue trigger with deferred visibility. Adding the message with an initial visibility delay keeps it invisible for the specified timeout, which conveniently acts as a timer.
// Requires the WindowsAzure.Storage / Microsoft.Azure.Storage.Queue package.
var account = CloudStorageAccount.Parse(connectionString);
CloudQueue queueOutput = account.CreateCloudQueueClient()
    .GetQueueReference("myqueue"); // same queue the trigger listens on
var strjson = JsonConvert.SerializeObject(message); // message is your payload
var cloudMsg = new CloudQueueMessage(strjson);
var delay = TimeSpan.FromHours(1);
queueOutput.AddMessage(cloudMsg, initialVisibilityDelay: delay);
See https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.storage.queue.cloudqueue.addmessage?view=azure-dotnet for more details on this overload of AddMessage.
You can use Azure Automation to schedule tasks programmatically via its REST API.
You can use Azure Event Grid also. Based on this article you can "Extend existing workflows by triggering a Logic App once there is a new record in your database".
Hope this helps.
The other answers are all valid options, but there are some others as well.
For Logic Apps you can build this behavior into the app, as described in the Scheduler migration guide. The solution described there is to create a Logic App with an HTTP trigger and pass the desired execution time to that trigger (in the POST data or query parameters). The 'Delay Until' block can then be used to postpone execution of the following steps until the time passed to the trigger.
You'd have to change the logic app to support this, but depending on the use case that may not be an issue.
For Azure Functions, a similar pattern can be achieved using Durable Functions, which have support for durable timers.
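For example (a sketch assuming the Microsoft.Azure.WebJobs.Extensions.DurableTask package; the "SendNotification" activity is a placeholder you would implement with Twilio/SendGrid/push):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ScheduledNotification
{
    [FunctionName("ScheduledNotification")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // The caller starts the orchestration with the desired delivery time as input.
        DateTime deliverAtUtc = context.GetInput<DateTime>();

        // Durable timer: the orchestration is unloaded and resumed at the deadline,
        // so nothing runs in the meantime.
        await context.CreateTimer(deliverAtUtc, CancellationToken.None);

        await context.CallActivityAsync("SendNotification", null);
    }
}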
For my new project, every component is going to be deployed in Azure. I have a 3rd-party application that publishes events through RabbitMQ, and I want to subscribe to these events and process them so that the data they carry is stored in my own database.
What would be the best way to go? Using WebJobs and writing my own custom trigger/binder for RabbitMQ?
Thanks for the advice in advance
Based on your requirements, I assume that an Azure WebJob is an ideal approach to achieve your purpose. In that case, you could use a WebJob as a consumer client to subscribe to the events and process the data. Try creating a WebJob, follow the link provided by Mitra to subscribe to the events, and implement your processing logic in the WebJob.
Please pay attention that WebJobs run as background processes in the context of an Azure Web App. In order to keep your WebJob running continuously, the app needs to run in Standard mode or higher and have the "Always On" setting enabled.
As for scaling, you could use the Azure Web Apps scale feature to run extra WebJob instances.
For subscription-based routing, you can use topics in RabbitMQ. Using topics you can push events to specific queues, and the consumers on those queues can then process the events and write the data into the database. The only thing to take care of is having a correct routing key for each queue.
That way you get a subscription-based mechanism. The only caveat of this approach is that there will be one queue per event.
The benefit of having one queue per event is that it is easy to keep track of events, which makes debugging easier.
If the number of events is very large you can use a single queue instead, but after consuming each message you will have to dispatch the corresponding event yourself.
Here is the link for the reference:
https://www.rabbitmq.com/tutorials/tutorial-five-python.html
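As a rough C# sketch of such a consumer (assuming the RabbitMQ.Client package; the host, exchange, and routing key are placeholders):

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class OrderEventConsumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // placeholder host
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Topic exchange: the routing key decides which bound queues receive the event.
        channel.ExchangeDeclare(exchange: "events", type: ExchangeType.Topic);
        var queueName = channel.QueueDeclare().QueueName;
        channel.QueueBind(queue: queueName, exchange: "events", routingKey: "order.*");

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            var json = Encoding.UTF8.GetString(ea.Body.ToArray());
            // TODO: write the event data into your own database here.
            Console.WriteLine($"{ea.RoutingKey}: {json}");
        };
        channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);

        Console.ReadLine(); // keep the consumer alive
    }
}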
Is there a way to retrieve Azure storage queue messages that are hidden? Background: I have been searching for an app/cmdlet/third-party tool that would let me back up the entire queue, including hidden messages (for troubleshooting purposes), but have been unable to find one.
I have also considered writing a PowerShell script to download all messages, but I couldn't find a way to retrieve the hidden ones.
Help will be greatly appreciated!
While I don't know whether such a tool exists for Azure Storage queues, have you considered Azure Service Bus topics and subscriptions for your queueing system? Under a topic-and-subscription model, you can set up the following architecture:
[Topic] Place messages here. They get replicated to each subscription.
[Subscription1] Your backup process reads this queue and persists messages.
[Subscription2] Your application reads from this queue for normal operation.
This has a few benefits:
It decouples your backup and production systems, making it less likely that, for example, a faulty backup script ends up impacting production behavior.
Locked ("hidden") messages apply only to the given subscription, so your backup queue will never have to deal with a message that is hidden or locked by the production queue.
Similar setups can certainly be achieved using storage queues, but Azure Service Bus has this sort of behavior built in.
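For reference, the topic and the two subscriptions could be provisioned in a few lines (a sketch assuming the Azure.Messaging.ServiceBus.Administration client; all names are placeholders):

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

class TopicSetup
{
    static async Task Main()
    {
        var admin = new ServiceBusAdministrationClient("<service-bus-connection-string>");

        // Every message sent to the topic is replicated to each subscription.
        await admin.CreateTopicAsync("messages");
        await admin.CreateSubscriptionAsync("messages", "backup");     // read by the backup process
        await admin.CreateSubscriptionAsync("messages", "production"); // read by the application
    }
}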
The simple answer is that you can't download all messages from a queue. Messages that are hidden are hidden from all other callers, including any 3rd-party apps, so you can't read those messages from anywhere other than the application that made them hidden in the first place.
You mention that the reason for wanting to back up the queue is troubleshooting. Depending on where your issues lie, it might be worth taking a look at Azure Storage's analytics capabilities. The logging infrastructure allows you to log every single transaction and greatly simplifies many troubleshooting scenarios. Take a look here for more information: http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/.
Can anybody explain the difference between Azure WebJobs and Azure Scheduler?
Azure Web Jobs
Only available on Azure Websites
It is used to run code at particular intervals, e.g. a console application every day.
Used to trigger and run workloads.
Mainly recommended for workloads that either scale with the website or are relatively small.
Can run persistently if "Always On" is selected; otherwise you will get the 20-minute idle timeout.
The code that needs to run and its schedule are defined together.
Azure Scheduler
Is not tied to Websites or Cloud Services
It allows you to call a website or add a message to a storage queue
Used for triggering events or triggering small workloads (e.g. adding to a queue), usually to kick off larger workloads.
Mainly recommended for triggering more complex workloads.
It is only a trigger; a separate function listening for trigger events (e.g. on queues) needs to be coded separately.
In many cases I prefer to use the Scheduler to push to a storage queue, with a worker role on each instance taking work off the queue. This keeps tasks controlled granularly and can also scale up or down independently of your website.
WebJobs scale up and down with your site, so your background tasks can become overtaxed if your website is experiencing low traffic and has scaled down.
Azure Scheduler - Provides a way to easily schedule HTTP calls on a well-defined schedule, like every hour, every Friday at 9:00 am, once a day, ...
Azure WebJobs - Provides a way to run small to medium workloads (in the form of a script: .exe, .cmd, .sh, .js, ...) in the same context as an Azure Website (but they can be hosted even with an empty website).
A WebJob can also run continuously (with a process that has a while loop), and Azure will make sure the WebJob is always running (with "Always On" set).
There is also an integration between Azure Scheduler and Azure WebJobs where you have a WebJob that runs some finite work and the scheduler is responsible for scheduling that work (invoking the WebJob).
So in summary: the Scheduler is about scheduling work, and WebJobs are about running workloads.