I am trying out WebJobs with deployment slots (dev, qa, prod). It seems like each slot needs its own storage account: when I use the same storage account for all of them, the job (it's a time-based job) runs in only one of the slots. In all the others, the job never fires.
I have tried provisioning another storage account, and when I set the connection strings (AzureWebJobsDashboard, AzureWebJobsStorage) in QA, the jobs begin firing (which seems to prove that I need multiple storage accounts). But I don't want to have to pay for three separate storage accounts when dev and qa will not be used all that much. Is there any setting that will allow one storage account to be used by different slots?
You don't need a different storage account, but you do need to set a different HostId on the JobHostConfiguration object. You can read it from an environment variable and then set that variable to a different value as a slot-specific app setting in each of your slots.
See https://github.com/Azure/azure-webjobs-sdk-extensions/issues/56 for related info.
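As a minimal sketch (assuming the classic WebJobs SDK 2.x JobHostConfiguration API; the app-setting name WEBJOBS_HOST_ID is just an example you would configure as a slot-specific setting with a different value per slot):

```
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // App settings surface as environment variables inside the WebJob.
        // Per-slot values might be e.g. "myapp-dev", "myapp-qa", "myapp-prod".
        var hostId = Environment.GetEnvironmentVariable("WEBJOBS_HOST_ID");
        if (!string.IsNullOrEmpty(hostId))
        {
            // Host IDs should be short, lowercase letters/digits/dashes; a distinct
            // value per slot keeps the timer/singleton locks in the shared storage
            // account from colliding between slots.
            config.HostId = hostId;
        }

        // Requires the Microsoft.Azure.WebJobs.Extensions package for TimerTrigger jobs.
        config.UseTimers();

        new JobHost(config).RunAndBlock();
    }
}
```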
I have a single Azure storage account that my WebJob uses to manage its queues, but I have multiple environments. When working with dev or dev-feature, all queues are created in that single storage account, so they get picked up by either the dev or the dev-feature environment. I need to create queues per environment and also receive them per environment. Since the queue names are constant, I am not able to receive them on a per-environment basis in the WebJob project.
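One way to parameterize the queue names per environment (a sketch only, using the WebJobs SDK's INameResolver; the app-setting name "Environment" and the "orders-%env%" pattern are illustrative, not from the original project):

```
using System;
using Microsoft.Azure.WebJobs;

// Resolves %env% tokens in trigger attributes from an app setting, so each
// environment (dev, dev-feature, qa, ...) binds to its own queue name.
class EnvironmentNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        if (name == "env")
        {
            // "Environment" is an illustrative app-setting name.
            return Environment.GetEnvironmentVariable("Environment") ?? "dev";
        }
        return name;
    }
}

class Functions
{
    // Binds to "orders-dev", "orders-dev-feature", etc. depending on the environment.
    public static void ProcessOrder([QueueTrigger("orders-%env%")] string message)
    {
        Console.WriteLine(message);
    }
}

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration
        {
            NameResolver = new EnvironmentNameResolver()
        };
        new JobHost(config).RunAndBlock();
    }
}
```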
Let's say I have an Azure App Service with 4 slots used for production, each one receiving 25% of the traffic.
Is it possible to have just one pre-production slot and swap it into the four production slots? Or how would you achieve this? We've thought of having 4 pre-production slots, one for every production slot, but that seems a bit of a mess; surely there's a better way to have multiple production slots and still get the benefits of swapping...
First, the concept of a slot:
Set up staging environments in Azure App Service
Simply put, when you create a web app, the default app lives in the Production slot. From there, you can create additional slots.
Normally a single test/staging slot is created and kept updated with the latest build, so the latest program can be deployed via a swap. It is also commonly used to route a portion of traffic to some users when the program gets new features. When different versions of the program need to run at the same time, you might create 4 slots as you have.
But you want all 4 of your slots to be updated through a single swap operation, which is really strange; this is why CSharpRocks asked you about it in a comment. Because your 4 slots are in the same App Service plan, creating 4 slots will not improve performance.
What you may actually be thinking of is scaling out to multiple instances, which can improve the performance of your web app.
We have written a highly scalable cloud service for MS Azure with two roles: "WebsiteRole" and "WebsiteWorkerRole". For better performance we deploy this cloud service in multiple regions (2x US, 2x EU, 1x JP). We have different configuration files for each region (EuWestProductive.azurePubxml, ServiceConfiguration.CloudEuWest.cscfg, Web.ReleaseEuWest.config).
Now the problem: in each region both the "WebsiteRole" and the "WebsiteWorkerRole" are running. But the "WebsiteWorkerRole" has only very small tasks, so one extra-small instance in a single region is more than enough.
We tried to set the Role instance count to zero (ServiceConfiguration.CloudEuWest.cscfg). But this is not allowed:
Azure Feedback: Allow a Role instance count of 0
Is there another way to remove a role when deploying the cloud service?
No, as you've discovered, a cloud service does not allow scaling to zero; you effectively have to remove the deployment. To keep the change to what you already have in place to a minimum, you could separate the two roles into two different deployments, then have an Azure Automation script, or a set of scripts run elsewhere, that handles deploying the worker role when it's needed and decommissioning it when it's not.
Depending on the type of workload that worker is doing, you could also take another route and use something like Azure Automation to perform the work itself. This is especially true if it's a small amount of processing that occurs only a few times a day. You're charged by the minute for the automation script, so just make sure it ends up running (and costing) less than the current instance does.
It really boils down to what that worker is doing, how much processing it really needs to do, how many resources it needs, and how often it needs to be running. There are a lot of options, such as Azure Automation, another thread on the web role, a separate cloud service deployment, etc., each with their own pros and cons. One option might even be to look at the new Azure Functions they just announced (in preview and charged per execution).
The short answer is: separate the worker from the WebsiteRole deployment, then decide on the best hosting mechanism for that worker, making sure the option includes the ability to run only when you need it to.
Thanks @MikeWo, your idea to separate the deployments was great!
I have verified this with a small example project and it works just fine. Now it is also possible to change the VM size and other configuration per region.
(Comments do not allow images)
After swapping the latest Azure deployment from staging to production, I need to prevent the staging worker role from accessing the queue messages. I can do this by detecting whether the environment is staging or production in code, but can anyone tell me if there is any other way to prevent the staging environment from accessing and processing queue messages?
Thanks for the help!
Mahesh
There is nothing in the platform that would do this. This is an app/code thing. If the app has the credentials (for example, account name and key) to access the queue, then it is doing what it was coded to do.
Have your staging environment use the primary storage key and your production environment use the secondary storage key. When you do the VIP swap you can regenerate the storage key that your now-staging environment is using which will result in it no longer having credentials to access the queue.
Notice that this does introduce a timing issue. If you do the swap first and then change the storage keys then you run the risk of the worker roles picking up messages in between the two operations. If you change the keys first and then do the swap then there will be a second or two where your production service is no longer pulling messages from the queue. It will depend on what your service does as to whether or not this timing issue is acceptable to you.
You can actually detect which deployment slot the current instance is running in. I detailed how to do this here: https://stackoverflow.com/a/18138700/1424115
It's really not as easy as it should be, but it's definitely possible.
If this is a question of protecting your DEV/TEST environment from your PRODUCTION environment, you may want to consider separate Azure subscriptions (one for each environment). This guide from Patterns and Practices talks about the advantages of this approach.
http://msdn.microsoft.com/en-us/library/ff803371.aspx#sec29
kwill's answer of regenerating keys is a good one, but I ended up doing this:
Optional - stop the production worker role from listening to the queue by changing an appropriate configuration key that tells it to ignore messages, then rebooting the VM (either through the management portal or by killing WaHostBootstrapper.exe); a sketch of such a config check appears after this answer
Publish to the staged environment (this will start accessing the queue, which is fine in our case)
Swap staged <-> production via Azure
Publish again, this time to the new staged environment (old live)
You now have both production and staging worker roles running the latest version and servicing the queue(s). This is a good thing for us, as it gives us twice the capacity, and since staging is running anyway we may as well use it!
It's important that you only use staging as a method of publishing to live (as it was intended) - create a whole new environment for testing/QA purposes, which has its own storage account and message queues.
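A minimal sketch of the config check mentioned in the first (optional) step above, assuming a classic worker role; the setting names "IgnoreQueueMessages" and "StorageConnectionString" and the queue name "work-items" are illustrative and would need to exist in your .cscfg:

```
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");

        while (true)
        {
            // "IgnoreQueueMessages" = "true" tells this deployment to leave the queue alone.
            // The setting is re-read each iteration, so a config change (or the reboot
            // mentioned in the step above) takes effect.
            var ignore = RoleEnvironment.GetConfigurationSettingValue("IgnoreQueueMessages");
            if (string.Equals(ignore, "true", StringComparison.OrdinalIgnoreCase))
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
                continue;
            }

            var message = queue.GetMessage();
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5));
                continue;
            }

            // ... process the message ...
            queue.DeleteMessage(message);
        }
    }
}
```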
We have a number of topics in Azure Service Bus and constantly update our environments through a VIP swap from staging to production.
When an instance is running in staging, we don't want its subscribers to read and delete messages that are intended to deliver events to our instances running in the production slot.
The solution I have come up with is to create subscriptions that include the RoleEnvironment.DeploymentId in the name. These are then deleted during RoleEntryPoint.OnStop() to avoid unused subscriptions.
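A rough sketch of that approach (assuming the classic Microsoft.ServiceBus NamespaceManager API; the topic path "events" and the connection-string setting name are illustrative):

```
using Microsoft.ServiceBus;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private const string TopicPath = "events"; // illustrative topic name
    private string subscriptionName;
    private NamespaceManager namespaceManager;

    public override bool OnStart()
    {
        var connectionString =
            RoleEnvironment.GetConfigurationSettingValue("ServiceBus.ConnectionString");
        namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // One subscription per deployment, so staging and production never share one.
        subscriptionName = "sub-" + RoleEnvironment.DeploymentId;
        if (!namespaceManager.SubscriptionExists(TopicPath, subscriptionName))
        {
            namespaceManager.CreateSubscription(TopicPath, subscriptionName);
        }

        return base.OnStart();
    }

    public override void OnStop()
    {
        // Remove this deployment's subscription so it doesn't keep accumulating messages.
        if (namespaceManager != null &&
            namespaceManager.SubscriptionExists(TopicPath, subscriptionName))
        {
            namespaceManager.DeleteSubscription(TopicPath, subscriptionName);
        }
        base.OnStop();
    }
}
```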
Is there a more elegant solution to this and am I missing something obvious?
One approach is to have a configuration setting that your application understands. It can then be changed between the staging/production environments, and the same config value can be used to enable/disable things you do not want running in production. For Service Bus you can create a Staging and a Production namespace and then put the URL in config.
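For example, a small sketch of such a config-driven switch (the setting names and the topic/subscription names are illustrative and would live in each environment's configuration):

```
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure.ServiceRuntime;

static class EventSubscriber
{
    // Returns null when this environment should not process topic messages.
    public static SubscriptionClient CreateClientOrNull()
    {
        // "ProcessTopicMessages" is flipped per environment/slot in the config.
        var enabled = RoleEnvironment.GetConfigurationSettingValue("ProcessTopicMessages");
        if (!string.Equals(enabled, "true", System.StringComparison.OrdinalIgnoreCase))
        {
            return null;
        }

        // Each environment points at its own namespace via its own connection string.
        var connectionString =
            RoleEnvironment.GetConfigurationSettingValue("ServiceBus.ConnectionString");

        return SubscriptionClient.CreateFromConnectionString(
            connectionString, "events", "worker");
    }
}
```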