I'm using an Azure Service Bus topic in a publish/subscribe pattern. I've got one topic and two subscriptions: one for the web site and one for a WebJob. This works fine when running only one web site instance.
When Azure scales up to two instances, the two WebJob instances still use the same subscription, so any single message is handled by only one of them, which is exactly what I want.
But the same is true for the web sites, which is not what I want. Both web site instances need to receive all messages and therefore need their own subscriptions.
Is there a way to get an "instance" name for the web sites and create the subscription dynamically (and remove the subscription again when the web site is scaled back in)?
I could create a subscription called %HOSTNAME%-web or something similar. But how do I remove those subscriptions afterwards when they're no longer needed?
If you want all instances to receive a copy of the message, then as you state you will need to refactor so that each instance creates its own subscription.
To implement a "volatile" queue, look at the AutoDeleteOnIdle property on QueueDescription (SubscriptionDescription exposes the same property): http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.servicebus.messaging.queuedescription.autodeleteonidle.aspx This should give you close to what you're looking for; just be careful to set the TimeSpan to a duration that ensures you don't miss messages.
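For the subscription-per-instance idea above, a minimal sketch with the older Microsoft.ServiceBus.Messaging SDK might look like the following. The "commands" topic name, the "%HOSTNAME%-web"-style subscription naming, and the 30-minute idle window are all placeholder choices:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class VolatileSubscriptionSetup
{
    public static SubscriptionClient CreateInstanceSubscription(string connectionString, string instanceName)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        string subscriptionName = instanceName + "-web"; // e.g. "%HOSTNAME%-web"

        if (!namespaceManager.SubscriptionExists("commands", subscriptionName))
        {
            var description = new SubscriptionDescription("commands", subscriptionName)
            {
                // Service Bus deletes the subscription automatically once it has been
                // idle this long, so scaled-down instances clean up after themselves.
                AutoDeleteOnIdle = TimeSpan.FromMinutes(30)
            };
            namespaceManager.CreateSubscription(description);
        }

        return SubscriptionClient.CreateFromConnectionString(connectionString, "commands", subscriptionName);
    }
}
```

With this in place there is nothing to delete explicitly; a subscription that nobody reads from just ages out.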
Alternatively, if you only foresee scaling up to a few instances, you can just set the time to live on your messages appropriately and leave the queues in place to be reused the next time you "scale up" (although using a host name for this likely wouldn't be a good idea). Depending on message volume, the cost might be small compared to the cost of the time required to develop an alternative approach.
Need a bit of architectural guidance. I have a set of stateless services that do various functions. My architecture allows for multiple copies of each service to run at the same time (as they are stateless), allowing me to:
scale up as needed for handling larger workloads
have fault-tolerance (if one instance of a service fails, no problem as there will be others to take on that work).
However, I don't want duplication of work.
If Service A, Instance 1 has already taken Job ABC, I don't want Service A, Instance 2, to take on that same job. So, I could avoid this problem by using Azure Service Bus queues. Only a single worker would get a particular item from the queue, and the item would only be reassigned to another worker if the first worker didn't mark it as complete within a set time.
So what's an appropriate use-case for Topics (Pub/Sub)? It seems like if I ever have multiple copies of the same service, I must rely on Queues. Is that right?
Asked another way, is there a way to use topics in Azure Service Bus or similar products/services but avoid duplication of work? Also, if there is a way to lock a message (for a short period of time) when using topics, is it possible to lock that message to just one instance of Service A (so no other instances of Service A will have access to it) while the message is still broadcast to Service B, Service C, etc.?
is there a way to use Topics in Azure Service Bus or similar products/services but avoid duplication of work?
Yes, there is. Basically, you would use each subscription as a queue: define proper filters so that each kind of message is routed to a single subscription (that way it acts like a queue), and have multiple listeners (service instances in your case) listen only to that specific subscription.
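As a rough sketch of that setup (the "jobs" topic, the "service-a" subscription and the ServiceName message property are all hypothetical names; this assumes the classic Microsoft.ServiceBus.Messaging SDK):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class SubscriptionAsQueueSetup
{
    public static void EnsureServiceASubscription(string connectionString)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // All Service A messages land in one subscription; every instance of Service A
        // listens to this same subscription, so they compete for messages like a queue.
        if (!namespaceManager.SubscriptionExists("jobs", "service-a"))
        {
            namespaceManager.CreateSubscription("jobs", "service-a",
                new SqlFilter("ServiceName = 'ServiceA'"));
        }
    }
}
```

Service B, Service C, etc. each get their own filtered subscription in the same way, so every service type sees the messages meant for it exactly once.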
Also, if there is a way to lock a message (for a short period of time) when using Topics, is it possible to lock that message to just one instance of Service A (so no other instances of Service A will have access to it) but the message will be broadcast to Service B, Service C, etc.?
It is certainly possible to lock a message. For that you will need to fetch messages in PeekLock mode. However, the lock only arbitrates between receivers on the same subscription: only one of them will be able to lock the message and access it, and for the other receivers on that subscription the message will be invisible while the lock is held. You can't have a scenario where one receiver on a subscription acquires the lock and other receivers on that same subscription still get the message; a separate subscription, however, receives its own copy.
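A minimal PeekLock sketch, again with hypothetical names, showing Complete on success and Abandon to release the lock so another instance of Service A can retry:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class ServiceAWorker
{
    // Each competing instance of Service A runs this against the same subscription.
    public static void ReceiveOne(string connectionString)
    {
        var client = SubscriptionClient.CreateFromConnectionString(
            connectionString, "jobs", "service-a", ReceiveMode.PeekLock);

        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
        if (message == null) return;

        try
        {
            // ... process the job ...
            message.Complete();   // done: no other instance will ever see it
        }
        catch (Exception)
        {
            message.Abandon();    // release the lock so another instance can retry
        }
    }
}
```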
Azure Function triggers would provide everything you are looking for out of the box.
If you are not leveraging any advanced queuing features of Service Bus, then I would recommend you look at storage queues to save some money.
If you need service bus then you can use service bus triggers.
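For instance, a Service Bus trigger bound to one subscription gives you the competing-consumers behaviour with no receive loop of your own; the topic, subscription and connection-setting names below are placeholders:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class JobHandler
{
    [FunctionName("HandleJob")]
    public static void Run(
        [ServiceBusTrigger("jobs", "service-a", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // The Functions runtime receives in PeekLock mode, completes the message on
        // success, and lets it be retried if this method throws.
        log.LogInformation("Processing job: {message}", message);
    }
}
```

A QueueTrigger binding against a storage queue works the same way if you take the cheaper storage-queue route.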
Hope that helps.
I am relatively new to cloud computing and Azure. I was wondering whether you can have more than one web and worker role in an Azure application. If so, what advantages can I get from using multiple roles, and where do they apply?
Yes, you can have more than one web or worker role in an Azure cloud service. I believe you can have up to 25 different roles per deployment, in any mix of web and worker roles. See the Azure Subscription and Service Limits, Quotas and Constraints link for more information.
The advantage of having the roles within the same cloud service is simply that within that cloud service they can see all the other roles and instances easily (unless you configure them otherwise). They will all be relatively close to each other within a data center because a cloud service is assigned to a stamp of machines and controlled by a Fabric Controller assigned to that stamp. You can watch this video by Mark Russinovich which sheds more light on the inner workings of Azure and talks a bit about stamps I think. A cloud service is a security boundary as well, so you get some benefits from that encapsulation if you need to do a lot of inter machine communication that ISN'T going across a queue for some reason.
The disadvantage of batching a whole bunch of roles together is that they are tied pretty closely together at that point. You can certainly scale them separately, and you can do updates that target only a single role at a time. However, if you want to deploy changes to multiple roles you may end up having to do a full deployment to all roles (even those that haven't changed) or do updates to single roles one at a time until all the ones you need updated are, which can take some time. Of course, it could be argued that having them in separate cloud services would still have you doing updates concurrently depending on your architecture and/or dependencies.
My suggestion is to group only roles that REALLY belong together in the same solution. These are roles whose workloads are interrelated. Even then, there's nothing stopping you from separating them into separate deployments (though you'd give up the security-boundary benefit of being within the same cloud service). Think about how each role will be updated, and whether they would generally be updated together or not. There are many factors in thinking about how to package roles together.
I am just about to move my website, database and a background scheduler onto the Azure platform.
I have to use cloud services with a web & worker role. Now my question is: do I need separate instances for each type of role, or is one instance capable of hosting multiple types of role?
You cannot have a combined web and worker role instance. It can be one or the other. However, it is possible to have the web role do background processing, so it can host the background workload.
See this SO question for a couple of options
Azure WebRole Scheduled Task Every Morning
That talks about running a task each morning. Obviously you can do it more frequently, as appropriate for your application.
Be aware of the scalability limitations of this though. Once your traffic ramps up, it will make sense to break it out into separate web and worker roles.
In fact, even if your background workload is light, it might still make more sense for you to go for a separate architecture from the start and use XS instances for the background processing.
Technically you can't have a role of both types. Yet a web role is essentially the same as a worker role; it just has IIS configured. So you can merge them into one web role: IIS will run in a separate process, and the role entry point's Run() will run an endless loop for the "backend" processing. See this similar question.
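A minimal sketch of such a merged role (DoBackgroundWork is just a placeholder):

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    // IIS serves the web site in its own process; this loop is the "worker" half
    // of the merged role.
    public override void Run()
    {
        while (true)
        {
            DoBackgroundWork();                     // poll a queue, run scheduled jobs, etc.
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    private void DoBackgroundWork()
    {
        // Hypothetical placeholder for the backend processing.
    }
}
```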
This will make scaling more complicated. The whole idea of separate roles (remember you can have not only one web role and one worker role, you can have for example four worker roles and two web roles if that's appropriate for your solution) is that you can scale them separately.
It may look like, once you merge two roles into one, you can no longer scale them independently. That's not true most of the time; you just have to change your metrics.
For example, say you wanted to run one web role instance for each thousand HTTP requests per minute and one worker role instance for each ten items in the backend queue. This means that each thousand HTTP requests needs about the same amount of processing power as ten items in the backend queue. So you craft a new metric that takes both parameters and deduces a number of instances. Say you have five thousand requests per minute and twenty items in the backend queue: you need seven instances of the merged role.
This won't work for all applications, but most of them will do just fine with this approach. The bonus is that you avoid cases where one of the roles sits idle because the current load falls entirely on the other role.
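As a sketch, the combined metric from the example above could be computed like this (the one-per-thousand-requests and one-per-ten-items ratios are just the hypothetical numbers from that example):

```csharp
using System;

class MergedRoleScaler
{
    // One instance per 1,000 HTTP requests/minute plus one per 10 queued items
    // (the hypothetical ratios from the example above).
    public static int TargetInstanceCount(int requestsPerMinute, int backlogLength)
    {
        int forWeb = (int)Math.Ceiling(requestsPerMinute / 1000.0);
        int forBackend = (int)Math.Ceiling(backlogLength / 10.0);
        return Math.Max(1, forWeb + forBackend);
    }
}

// TargetInstanceCount(5000, 20) == 7, matching the numbers above.
```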
I want to create an Azure application which does the following:
User is presented with an MVC 4 website (web role) which shows a list of commands.
When the user selects a command, it is broadcast to all worker roles.
Worker roles process the task, store the results and notify the web role
Web role displays the combined results of the worker roles
From what I've been reading there seem to be two ways of doing this: the Windows Azure Service Bus or using Queues. Each worker role also stores the results in the database.
The Service Bus seems more appropriate, with its publish/subscribe model, so all worker roles would get the same command at roughly the same time. Queues seem easier to use, though.
Can the Service Bus be used locally with the emulator when developing? I am using a free trial and cannot keep the application running constantly whilst still developing. Also, when using queues, how can you notify the web role that processing is complete?
I agree. Service Bus is a better choice for this messaging requirement. You could, with some effort, do the same with queues, but you'd be writing a lot of code to implement things that Service Bus already gives you.
There is not a local emulator for Service Bus like there is for the Azure Storage service (queues/tables/blobs). However, you can still use Service Bus for messaging between roles while they are running locally in your development environment.
As for your last question about notifying the web role that processing is complete, there are several ways to go here. Just a few thoughts (not an exhaustive list)...
Table storage, where the web role can periodically check the status of the unit of work.
Another Service Bus queue/topic for completed work (see the sketch after this list).
Internal endpoints. You'll need logic to know whether it's just an update from worker role N or whether it indicates a completed unit of work for all worker roles.
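For the queue/topic option, here's a sketch of what each worker might send when it finishes. The "job-completions" queue and the WorkerInstance property are hypothetical; the web role would receive from this queue and count completions per CorrelationId:

```csharp
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure.ServiceRuntime;

class CompletionNotifier
{
    public static void ReportDone(string connectionString, string jobId)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "job-completions");

        var message = new BrokeredMessage
        {
            CorrelationId = jobId   // lets the web role group completions by job
        };
        message.Properties["WorkerInstance"] = RoleEnvironment.CurrentRoleInstance.Id;

        client.Send(message);
    }
}
```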
I agree with Rick's answer, but would also add the following things to think about:
If you choose the Service Bus topic approach, then as each worker role instance comes online it would need to create a subscription to the topic. You'll need to think about subscription maintenance: what happens when one of the workers fails and is recycled, or any number of other reasons why a stale subscription may be left out there.
Telling the web role that all the workers are complete is interesting. The options Rick provides are good ones, but you'll need to think about some things here. It means that the web role needs to know just how many workers are out there, or needs some other mechanism to decide when all have reported done. You could have the situation of five worker roles receiving a message and starting work, and then one of them starts to repeatedly fail processing. The other four report their completion, but now the web role is waiting on the fifth. How long do you wait for a reply? Can you continue? What if you just told the system to scale down, and while the web role thinks there are five workers there are now only four? These are things you'll need to think about, and they all depend on your requirements.
Based on your question, you could use either queue service and get good results. But each of them is going to have different challenges to overcome, as well as advantages.
Some advantages of Service Bus queues are that they provide a blocking receive with a persistent connection (up to 100 connections), they can monitor messages for completion, and they can carry larger messages (256 KB).
Some advantages of storage queues over the Service Bus solution are that they're slightly faster (if 15 ms matters to you), you can stick to a single storage system (since you'll probably be using Storage for blob and table services anyway), and auto-scaling is simple. If you need to auto-scale your worker roles based on the load, passing the requests through a storage queue makes auto-scaling trivial: you just set up auto-scaling in the Azure cloud service UI under the scale tab.
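If you go the storage-queue route, the basic pattern is short; the "commands" queue name is a placeholder:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class StorageQueueExample
{
    public static void RoundTrip(string storageConnectionString)
    {
        var queue = CloudStorageAccount.Parse(storageConnectionString)
            .CreateCloudQueueClient()
            .GetQueueReference("commands");
        queue.CreateIfNotExists();

        // Web role side: enqueue a command.
        queue.AddMessage(new CloudQueueMessage("do-something"));

        // Worker role side: the message stays invisible for the timeout and
        // reappears for another worker if it is never deleted (processing failed).
        CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (msg != null)
        {
            // ... process ...
            queue.DeleteMessage(msg);
        }
    }
}
```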
A more in-depth comparison of the two azure queue services can be found here: http://msdn.microsoft.com/en-us/library/hh767287.aspx
Also, when using queues how can you notify the web role that processing is complete?
For the Azure Storage Queues solution, I've written a library that can help: https://github.com/brentrossen/AzureDistributedService.
It provides a proxy layer that facilitates RPC style communication from web roles to worker roles and back through Storage Queues.
Good day,
I would like to ask a conceptual question that has been tearing at my mind for a while. There is probably no right or wrong answer here, but I hope to get a better understanding of my options.
Situation:
I have a Windows Azure Application. It consists of ONE Web Role with ONE instance and ONE worker role with TWO instances.
The main focus is on the worker role. It implements a Quartz.NET scheduler to perform a task of getting information from a CRM system, making a table out of it, and uploading it to an FTP server, say, every 8 hours. Very simple, nothing fancy.
The web role is used for manually triggering the job if someone needs it to run between the 8-hour intervals. It's a simple user interface with pretty much one button.
However, I would like to be able to change some configuration options of the worker role from the web role. For example, the credentials of the destination FTP server and the schedule of the job, e.g. make it run hourly instead of every 8 hours. The configuration DOES NOT need to persist if the role goes offline. At the moment the config is kept in a static class.
This wouldn't seem to be a problem if I was running one worker role instance: I would send a message from the web role via a queue and change some static variables in the worker role, for example. But what confuses me is that a queued message can only be picked up by one role instance, not both at the same time. So I would end up having the job run every 8 hours in one instance and every hour in the other.
Is there any way to notify both instances that configuration needs to change?
There are several ways you could accomplish this. Let me offer two, aside from what @Gaurav suggested:
Windows Azure configuration settings. If you store your FTP server name and time interval as configuration settings and then change the configuration, each role instance can capture and handle the RoleEnvironment.Changing event and take action based on the new settings (see the sketch after this list).
Service Bus pub/sub. You can have each role instance subscribe to a configuration-change topic. Then, whenever you have a change, publish it to the topic, and each instance, being a subscriber, will receive the message and act accordingly.
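Here's a sketch of the configuration-settings option. The JobConfig static class and the "FtpServer"/"ScheduleHours" setting names are placeholders, and note that the new values are read in the Changed event, after the change has actually been applied:

```csharp
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

// Hypothetical static config class, as described in the question.
public static class JobConfig
{
    public static string FtpServer;
    public static int ScheduleHours;
}

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Don't recycle the instance for configuration-only changes...
        RoleEnvironment.Changing += (sender, e) =>
        {
            if (e.Changes.All(c => c is RoleEnvironmentConfigurationSettingChange))
            {
                e.Cancel = false;   // apply the change in place, no restart
            }
        };

        // ...and re-read the settings once the change has been applied.
        RoleEnvironment.Changed += (sender, e) =>
        {
            JobConfig.FtpServer = RoleEnvironment.GetConfigurationSettingValue("FtpServer");
            JobConfig.ScheduleHours = int.Parse(RoleEnvironment.GetConfigurationSettingValue("ScheduleHours"));
        };

        return base.OnStart();
    }
}
```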
One possibility would be to do a PEEK on the message in your worker role instances instead of a GET, so that the message remains visible to all the instances. Obviously the challenge there would be knowing when to delete the message. The other alternative would be to create a separate queue for each worker role instance (you can name the queues so that each worker role instance GETs messages only from the queue intended for that instance; e.g. if your worker role instances are WorkerRole1_IN_0 and WorkerRole1_IN_1, you could name your queues workerrole1-in-0, workerrole1-in-1 and so on). Just thinking out loud :)
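A sketch of that queue-per-instance naming; deriving the index from the instance's position in the role is a simplification, and the queue name is built from the role's own name:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class InstanceQueue
{
    // Builds a queue name like "workerrole1-in-0" so each worker role instance
    // reads only from its own queue.
    public static CloudQueue GetOwnQueue(string storageConnectionString)
    {
        RoleInstance me = RoleEnvironment.CurrentRoleInstance;
        int index = me.Role.Instances.IndexOf(me);
        string queueName = (me.Role.Name + "-in-" + index).ToLowerInvariant();

        var queue = CloudStorageAccount.Parse(storageConnectionString)
            .CreateCloudQueueClient()
            .GetQueueReference(queueName);
        queue.CreateIfNotExists();
        return queue;
    }
}
```

The web role would then enqueue the same configuration message onto every instance's queue so that all of them pick up the change.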