Windows Azure time-based event processing

I am trying to design an application where a user can create a task that must run at a certain time. To compound the problem, the task needs an associated countdown function, so that at, say, one minute before the scheduled time it must "come to life" and notify the user.
This solution will be developed in .NET using Windows Azure and the Azure Service Bus. My question is: once the message/task is delivered via the bus, how do I architect the solution to handle potentially thousands of independent events that must be processed with second or sub-second accuracy? I can't have one worker role per user; that would be insanely expensive ...
I was thinking of using something like Quartz.net on the back end to handle the job processing and scheduling, but surely there must be a better way with Azure ...
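To make the scenario concrete, here is roughly the shape of the scheduling side, sketched with Service Bus scheduled delivery (the queue name and payloads are made up):

using System;
using Microsoft.ServiceBus.Messaging;

class TimedTaskSender
{
    static void Main()
    {
        // Placeholder connection string and queue name.
        var client = QueueClient.CreateFromConnectionString("<connection-string>", "user-tasks");

        DateTime runAt = DateTime.UtcNow.AddHours(2); // the user's chosen time

        // Service Bus holds scheduled messages and only makes them visible at
        // ScheduledEnqueueTimeUtc: one for the 1-minute warning, one for the task itself.
        var warning = new BrokeredMessage("countdown-warning") { ScheduledEnqueueTimeUtc = runAt.AddMinutes(-1) };
        var task = new BrokeredMessage("run-task") { ScheduledEnqueueTimeUtc = runAt };

        client.Send(warning);
        client.Send(task);
    }
}

The open question remains how to consume these with sub-second accuracy once they become visible.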
Any guidance and assistance would be greatly appreciated
Thanks

Related

How to defer / schedule a retry policy on message receiving from Service Bus?

I'm using the new Microsoft library for the Service Bus (Microsoft.Azure.ServiceBus) for .NET Core, and I'm having trouble finding a solution for my problem. This is what I'm trying to accomplish:
My application processes a number of different events, and sometimes, because a dependency has gone down, I have to store a failed event for later processing. I created a queue for that, and I can send to it without any problems, but I'm having a difficult time receiving events from this queue, because I want some way to look at it every 5 minutes and see if there is a failed event to process. Is there a way to "schedule" the retrieval of these messages from the queue? I'm reading some guides, but they all schedule the sending process, not the receiving one.
Writing a background thread for this application would be a tremendous and hellish task, so I would like to know if there is a more practical way to accomplish this. Any suggestions are greatly appreciated!
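For reference, the scheduling the guides show is on the send side: it delays when a message becomes visible rather than scheduling the receive. A sketch, with placeholder connection string and queue name:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class FailedEventSender
{
    static async Task Main()
    {
        // Placeholder connection string and queue name.
        var client = new QueueClient("<connection-string>", "failed-events");

        var message = new Message(Encoding.UTF8.GetBytes("failed event payload"));

        // The message is enqueued now but only becomes visible to receivers
        // in 5 minutes, so a normal message handler would pick it up then.
        await client.ScheduleMessageAsync(message, DateTimeOffset.UtcNow.AddMinutes(5));
        await client.CloseAsync();
    }
}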
Thank you.

Trigger multiple concurrent Service Bus trigger Azure Functions without time degradation

I have a Service Bus trigger function that, when it receives a message from the queue, does a simple db call and then sends out emails/SMS. Can I put > 1000 calls in my Service Bus queue to trigger the function to run simultaneously without the run time being affected?
My concern is that I queue up 1000+ messages to trigger my function all at the same time, say 5:00 PM, to send out the emails/SMS. If they end up running later because there are so many running threads, the users receiving the emails/SMS won't get them until an hour after the designated time!
Is this a concern, and if so, is there a remedy?
FYI - I know I can make the function run asynchronously; would that make any difference in this scenario?
1000 messages is not a big number. If your e-mail/SMS service can handle them fast, the whole batch will be gone relatively quickly. A few things to know, though:
Functions won't scale to 1000 parallel executions in this case. They will start with 1 instance doing ~16 parallel calls at the same time, observe how fast the processing goes, then maybe add a second instance, wait again, and so on.
The exact scaling behavior is not publicly described and can change over time. Thus, YMMV, and you need to test against your specific scenario.
Yes, make the functions async whenever you can. I don't expect a huge boost in processing speed just from that, but it certainly won't hurt.
Bottom line: your scenario doesn't sound like a problem for Functions, but if you need very low latency, you'll have to run a test before relying on it.
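For illustration, an async Service Bus triggered function might look like this; it's a sketch, and the queue name, connection setting and send helpers are placeholders:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NotifyFunction
{
    [FunctionName("NotifyFunction")]
    public static async Task Run(
        [ServiceBusTrigger("notifications", Connection = "ServiceBusConnection")] string payload,
        ILogger log)
    {
        log.LogInformation("Processing {payload}", payload);

        // Hypothetical helpers: the db call and e-mail/SMS sends go here.
        await SendEmailAsync(payload);
        await SendSmsAsync(payload);
    }

    private static Task SendEmailAsync(string p) { return Task.CompletedTask; } // stub
    private static Task SendSmsAsync(string p) { return Task.CompletedTask; }   // stub
}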
I'm assuming you are talking about an Azure Service Bus binding to an Azure Function. There should be no issue with >1000 Azure Functions firing at the same time. Functions are a serverless runtime and can scale out considerably if you are running under the consumption model. If you are running the functions in an App Service plan, you may be limited by that plan.
In your scenario you are more likely to overwhelm the downstream dependencies, the database and the SMS sending system, before you overwhelm the Azure Functions infrastructure.
The best thing to do is some load testing, and to monitor the exceptions coming out of the connections to the database and SMS systems.
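A quick way to generate that load for a test, assuming the same Microsoft.Azure.ServiceBus client (connection string and queue name are placeholders):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class LoadTest
{
    static async Task Main()
    {
        var client = new QueueClient("<connection-string>", "notifications");
        var sw = Stopwatch.StartNew();

        // Send 1000 messages in batches of 100 to stay under the batch size limit.
        for (int batch = 0; batch < 10; batch++)
        {
            var messages = new List<Message>();
            for (int i = 0; i < 100; i++)
                messages.Add(new Message(Encoding.UTF8.GetBytes($"test-{batch}-{i}")));
            await client.SendAsync(messages);
        }

        Console.WriteLine($"Enqueued 1000 messages in {sw.Elapsed}");
        await client.CloseAsync();
    }
}

Then watch the database and SMS connections for exceptions while the functions drain the queue.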

Tasks that need to be performed on a certain date in Azure

I am developing an application using Azure Cloud Services and Web API. I would like to allow users who create a consultation session the ability to change the price of that session; however, I would like to give all users 30 days to leave the session before the new price affects all members currently signed up for it. My first thought was to use queue storage and set the visibility timeout to the 30-day time limit, but it seems like the queue could grow really fast over time, especially if a message shouldn't run for 30 days; not to mention the ordering issues. I am looking at the task scheduler as well, but the session pricing changes are not recurring, more random. Is the queue idea a good approach, or is there a better and more efficient way to accomplish this?
The stuff you are trying to do should be done with a relational database. You can use timestamps to record when the price for a session changed. I wouldn't use a queue at all for this. A queue is more for passing messages in a distributed system. Your problem is just about tracking what prices changed on what sessions and when. That data should be modeled in a database.
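A sketch of what that model could look like (type and property names are illustrative):

using System;

// Each price change is recorded with a timestamp; it only takes effect
// once the 30-day notice window has elapsed.
public class SessionPriceChange
{
    public int SessionId { get; set; }
    public decimal NewPrice { get; set; }
    public DateTime AnnouncedUtc { get; set; }
    public DateTime EffectiveUtc { get { return AnnouncedUtc.AddDays(30); } }
    public bool Applied { get; set; }
}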
I think this scenario is better suited to Azure Scheduler. Programmatically create a job with a one-time recurrence, with its date set 30 days out, so it runs once. When the scheduler triggers the job automatically, have its action call back to one of your APIs/services to do the price and other required updates, and also remove the job from the scheduler as part of that action to keep the job list clean. In any case, the premium plan of an Azure Scheduler job collection gives you an unlimited number of jobs to run.
Hope this is exactly what you were looking for...
I would consider using Azure WebJobs. A WebJob basically gives you the ability to run a .NET console application within the context of an Azure Web App. It can be run on demand, continuously, or on a recurring schedule. If your processing requirements are low and allow for it, WebJobs can also run in the same process as your Web App to save you $$$, as they are free that way.
You could schedule the WebJob to run once or twice per day to examine the situation and react as appropriate. Since it's really just a .NET console application, you have ultimate flexibility.
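A minimal sketch of such a WebJob using the WebJobs SDK's TimerTrigger extension (the schedule and the price-change check are illustrative):

using System;
using System.IO;
using Microsoft.Azure.WebJobs;

public class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.UseTimers(); // enables the TimerTrigger extension
        new JobHost(config).RunAndBlock();
    }

    // CRON fields: sec min hour day month day-of-week; this runs daily at 02:00.
    public static void ApplyDuePriceChanges(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer, TextWriter log)
    {
        // Hypothetical: query for price changes whose 30-day notice period
        // has elapsed, apply them, and mark them as processed.
        log.WriteLine("Checked for due price changes at {0}", DateTime.UtcNow);
    }
}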

Several worker roles more expensive?

Which scenario is less expensive in $$$ using Windows Azure? And is it better to separate the two tasks? E-mails are rarely sent, but chat messages are posted all the time.
Having one worker role processing e-mails pulled from the Azure Queue every 10 seconds, and one worker role processing posted chat messages from the Azure Queue every 1 second.
Having one generic worker role that processes both e-mail sending and chat messages every 1 second.
Worker roles are most efficient when run at or near full CPU capacity; you are, after all, paying by the CPU-hour for them. A useful way to achieve this is to combine worker roles so that all of your background jobs end up being performed in a single role.
A great way to run a single worker role architecture is to use a generic worker role pattern: basically a plugin pattern whereby the worker role reads a message off the queue and uses some metadata encoded into the message (or the name of the queue) to determine the type of processing it requires. It then goes to blob storage to retrieve the .NET assembly that performs that type of processing, instantiates it into a new AppDomain, and marshals the context into that assembly for processing.
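A condensed sketch of that dispatch loop (the queue/container names, message convention and IJobProcessor contract are all illustrative, and a production version would load each assembly into its own AppDomain as just described):

using System;
using System.Reflection;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative plugin contract.
public interface IJobProcessor { void Process(string payload); }

public class GenericWorker
{
    public void Run(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("jobs");
        CloudBlobContainer plugins = account.CreateCloudBlobClient().GetContainerReference("processors");

        while (true)
        {
            CloudQueueMessage msg = queue.GetMessage();
            if (msg == null) { Thread.Sleep(1000); continue; }

            // Assumed message convention: "<processorName>|<payload>".
            string[] parts = msg.AsString.Split('|');
            byte[] bytes = plugins.GetBlobReference(parts[0] + ".dll").DownloadByteArray();

            Assembly assembly = Assembly.Load(bytes);
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IJobProcessor).IsAssignableFrom(type) && !type.IsAbstract)
                    ((IJobProcessor)Activator.CreateInstance(type)).Process(parts[1]);
            }

            queue.DeleteMessage(msg);
        }
    }
}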
This is covered in the Asynchronous Workloads session of the Windows Azure Platform Training Kit, which also contains a hands-on lab that guides you through a sample implementation of this type of approach.
The folks from Lokad have a really elegant implementation, including all the polish and administration mechanisms you'd need to do this properly. Their implementation is New BSD licensed and won the MSFT Azure Partner of the Year award last year. It's an essential part of almost every Azure project that I build. Highly recommended and trivial to integrate: http://code.google.com/p/lokad-cloud/
So in short, I prefer a generic worker role implemented as a plugin-type pattern with dynamic type loading and instantiation.
This all depends on your scaling strategy and how many instances you're going to need to run to handle your load.
If you're planning to take advantage of the compute SLA (99.95% uptime), you will need at least 2 instances for every role.
Thus, if you split them up, you will need at least 4 instances. If you keep them together, you'll need at least 2.
Processing 1 email per 10s and 1 chat message per second does not sound like a lot and I don't think you'll need more than 2 instances to handle everything.
However, if the processing load becomes lop-sided (i.e. chat messages need more computing power than e-mails) and the total load exceeds 4 instances, I suggest splitting them up so that you can scale the two processes separately.
You're charged based on how many hours you're running and how many CPU cores you're running. So if you spin up four small VMs all doing the same thing versus two small VMs doing one thing and two doing another, the cost is the same.

WF4 Affinity on Windows Azure and other NLB environments

I'm using Windows Azure and WF4, and my workflow service is hosted in a web role (with N instances). My job now is to find out how to do affinity, in a way that lets me send messages to the right workflow instance. To explain this scenario: my workflow (attached) starts with a "StartWorkflow" receive activity, creates 3 "Person" instances and, in a parallel for-each, waits for the confirmation of these 3 people ("ConfirmCreation" receive activity).
I then started to research how affinity is done in other NLB environments (mainly looking for information about how this works on Windows Server AppFabric), but I didn't find a precise answer. So how is it done in other NLB environments?
My next task is to find out how I could implement a system to handle this affinity on Windows Azure, and how much such a solution would cost (in price, time and amount of work), to see if it's viable or if it's better to work with only one web role instance while we wait for the WF4 host for the Azure AppFabric. The only way I found was to persist the workflow instance. Are there other ways of doing this?
My third, but not last, task is to find out how WF4 handles multiple messages received at the same time. In my scenario, this means: how would it cope if the 3 people confirmed at the same time and the confirmation messages were also received at the same time? Since the most logical answer to this problem seems to be a queue, I started looking for information about queues in WF4 and found people talking about MSMQ. But what is WF4's native message-handling system? Is this handler really a queue, or is it another system? How is this concurrency handled?
You shouldn't need any affinity. In fact, that's kind of the whole point of durable workflows: whilst your workflow is waiting for this confirmation, it should be persisted and unloaded from any one server.
As far as persistence goes, for Windows Azure you would either need to hack the standard SQL persistence scripts so that they work on SQL Azure, or write your own InstanceStore implementation that sits on top of Azure Storage. We have done the latter for a workflow we're running in Azure, but I'm unable to share the code. On a scale of 1 to 10 for effort, I'd rank it around an 8.
As far as multiple messages go, they will be received and delivered to the workflow instance one message at a time. Now, it's possible that every one of those messages goes to the same server, or that each one goes to a different server. Either way, the workflow runtime will attempt to load the workflow from the instance store, see that it is currently locked, and block/retry until the workflow becomes available to process the next message. So you don't have to worry about concurrent access to the same workflow instance, as long as you configure everything correctly and the InstanceStore implementation is doing its job.
Here are a few other suggestions:
Make sure you use the PersistBeforeSend option on your SendReply activities.
Configure the following workflow service options:
<workflowIdle timeToUnload="00:00:00" />
<sqlWorkflowInstanceStore ... instanceLockedExceptionAction="AggressiveRetry" />
Using the out-of-the-box SQL instance store with SQL Azure is a bit of a problem at the moment with the Azure 1.3 SDK: each deployment, even with zero code changes, results in a new service deployment, meaning that already-persisted workflows can't continue. That is a bug that will be fixed, but it's a PITA for now.
As Drew said, your workflow instance should just move from server to server as needed; there's no need to pin it to a specific machine. Even if you could, that would hurt scalability and reliability, so it's something to be avoided.
Sending messages through MSMQ using the WCF NetMsmqBinding works just fine. Internally, WF uses a completely different mechanism called bookmarks that allows a workflow to stop and resume. Each Receive activity, as well as others like Delay, will create a bookmark and wait for it to be resumed. You can only resume existing bookmarks. Even resuming a bookmark is not a direct action: it is put into an internal queue (not MSMQ) by the workflow scheduler and executed through a SynchronizationContext. You get no control over the scheduler, but you can replace the SynchronizationContext when using the WorkflowApplication and so get some control over how and where activities are executed.
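To make the bookmark mechanism concrete, here is a minimal sketch (the activity, bookmark name and payload are made up):

using System;
using System.Activities;
using System.Threading;

// An activity that creates a bookmark and idles until the host resumes it.
public sealed class WaitForConfirmation : NativeActivity<string>
{
    public string BookmarkName { get; set; }

    // Lets the runtime persist/unload the workflow while it waits.
    protected override bool CanInduceIdle { get { return true; } }

    protected override void Execute(NativeActivityContext context)
    {
        context.CreateBookmark(BookmarkName, OnConfirmed);
    }

    void OnConfirmed(NativeActivityContext context, Bookmark bookmark, object value)
    {
        Result.Set(context, (string)value);
    }
}

class Host
{
    static void Main()
    {
        var app = new WorkflowApplication(new WaitForConfirmation { BookmarkName = "Confirm" });
        var idle = new AutoResetEvent(false);
        app.Idle = e => idle.Set();
        app.Completed = e => Console.WriteLine("Result: {0}", e.Outputs["Result"]);

        app.Run();
        idle.WaitOne(); // wait until the bookmark has been created

        // Resumption is queued through the scheduler's internal queue described above.
        app.ResumeBookmark("Confirm", "approved");
        Console.ReadLine();
    }
}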
