Azure WebJob Queues based on project environments - azure

I have a single Azure storage account that my WebJob uses to manage its queues, but I have multiple environments. When working with dev or dev-feature, all queues are created in that single storage account, so messages are picked up by whichever of the dev or dev-feature environments gets to them first. I need to create queues per environment and also receive from them per environment. Since the queue names are constant, the WebJob project cannot receive messages on a per-environment basis.
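One common way to handle this in the WebJobs SDK is to keep the queue name out of code entirely and resolve it from configuration with the %...% binding syntax, then set that app setting to a different value per environment. A minimal sketch, assuming WebJobs SDK 3.x and a hypothetical app setting named OrdersQueueName (set to e.g. "orders-dev" in dev and "orders-dev-feature" in dev-feature):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // The SDK resolves %OrdersQueueName% against configuration at startup,
    // so each environment listens on its own queue without code changes.
    public void ProcessOrder(
        [QueueTrigger("%OrdersQueueName%")] string message,
        ILogger logger)
    {
        logger.LogInformation("Processing: {Message}", message);
    }
}
```

The sender side would read the same setting when enqueuing, so both ends agree on the environment-specific queue name.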

Related

One Azure Function in one repo deployed in multiple Azure App Services

Can I deploy one Azure time trigger function from one repo to multiple App Services?
Example:
Currently I have a repo with one Azure function in it (name Function1, runs every few mins).
I have 5 customers, I have a database for each customer and therefore I have 5 connection strings. Each customer requires me to host the function in isolated environment independent from the other customers.
The function "Function1" does the same logic for each of my customers. It just accesses a different database for each using the different connection string.
Therefore, I created 5 App Services: Function1-Customer1, Function1-Customer2, ... to satisfy the "independent environment requirement".
Each App Service has the unique db connection string assigned in the App Settings.
I tried to deploy "Function1" to all 5 of these App Services. However, when I then view the Log Stream for any of the App Services, it seems that only one instance of the function is running, depending on which App Service was deployed last.
So, for example, if Function1-Customer1 was deployed last and I go to Function1-Customer2 or Function1-Customer3 to see the Log Streams, both output the connection string of Function1-Customer1. If Function1-Customer2 was deployed last, then I would see its connection string in all the other App Services.
Is it possible to deploy the Function1 to serve all these 5 App Services? Or do I need a different architecture here?
The functions coordinate by obtaining leases in the underlying blob storage. If two function apps end up fighting over the same lease, they will block each other even though they are supposed to do different things. You can explore this by looking at the blobs in the underlying storage account and checking the "lease" status.
Based on our discussion in the comments, I would recommend using a dedicated storage account for each function app. I would not recommend AzureFunctionsWebHost__hostid or similar solutions, since they add more complexity.
For each trigger, an Azure Function manages its own queue in Azure Queue Storage. You can use a single function app and trigger 5 different tasks, one per customer, or you can create a separate Azure storage account for each function app.
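To illustrate the separate-account approach above, each of the five function apps would carry its own AzureWebJobsStorage plus its customer-specific connection string in its App Settings. A sketch for Function1-Customer1 (account, key, and server values are placeholders):

```json
{
  "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=func1cust1;AccountKey=<key>",
  "CustomerDbConnectionString": "Server=<server>;Database=Customer1Db;User Id=<user>;Password=<password>"
}
```

With a distinct AzureWebJobsStorage per app, the lease blobs no longer collide, so each deployment schedules independently.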

Azure Trigger Function does not schedule in Slots (preview)

I have a function app with three Timer Trigger functions in it. I want to use the staging/production functionality provided by the Slots (preview), so I set up the VSTS deployment for two separate branches. The primary function app polls master and the slot polls a branch called staging.
The problem is that when I start the function app, the main functions schedule and run, but the Slot functions don't seem to get scheduled to run at all. Things I've tried:
I set the host.json file for each with a separate 'id' field to avoid a conflict on the locks that determine whether or not they can run. Looking in the storage account, I can see a folder for each app (the main and the slot) and a folder for each function, which I think means they shouldn't be using the same locks.
Use a separate storage account for the Slot app
Stop the main function app while keeping the Slot app running
Can anyone tell me what might be wrong with my setup or if there's a known bug with Slots (preview) preventing this from working?
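For reference, the 'id' mentioned above is a top-level property of host.json (Functions v1 schema), and it must differ between the production app and the slot for their singleton locks to be independent. A sketch (the value is a placeholder):

```json
{
  "id": "myfunctionapp-staging"
}
```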

WebJobs in multiple deployments slots and StorageAccounts

I am trying WebJobs with deployment slots (dev, qa, prod). Each slot seems to need its own storage account: when I try to use the same storage account, the job (a time-based job) runs in only one of the slots. In all the others, the job never fires.
I have tried provisioning another storage account, and when I set the connection strings (AzureWebJobsDashboard, AzureWebJobsStorage) in QA, the jobs begin firing (which seems to prove that I need multiple storage accounts). But I don't want to pay for 3 separate storage accounts when dev and qa will not be used all that much. Is there any setting that will allow one storage account to be used by different slots?
You don't need a different storage account, but you do need to set a different HostId on the JobHostConfiguration object. You can read it from an environment variable, and then set a different value as a Slot App Setting in each of your slots.
See https://github.com/Azure/azure-webjobs-sdk-extensions/issues/56 for related info.
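The answer above can be sketched as follows, assuming WebJobs SDK 2.x and a hypothetical Slot App Setting named WebJobsHostId (the setting name is an assumption, not an SDK convention):

```csharp
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Each slot sets its own value for this Slot App Setting, so the
        // slots take distinct singleton locks in the shared storage account.
        var hostId = Environment.GetEnvironmentVariable("WebJobsHostId");
        if (!string.IsNullOrEmpty(hostId))
        {
            config.HostId = hostId; // must be lowercase alphanumerics/dashes
        }

        new JobHost(config).RunAndBlock();
    }
}
```

Because Slot App Settings are sticky to the slot, the HostId travels with the slot rather than with the deployed code.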

Running an exe in azure at a regular interval

I have an app (.exe) that picks up a file and imports it into a database. I have to move this setup into Azure. I am familiar with Azure SQL and Azure File Storage. What I am not familiar with is how to execute an app within Azure.
My app reads rows out of my Azure database to determine where the file is (in Azure File Storage) and then dumps the data into a specified table. I'm unsure if this scenario is appropriate for Azure Scheduler or if I need an App Service to set up a WebJob.
Is there any possibility I can put my app directly in Azure File Storage and point a task at that location to execute it? (Then it might be easier to resolve the locations of the files to be imported.)
Thanks.
This is a good scenario for Azure Functions, if you want to just run some code on a schedule in Azure.
Functions are like WebJobs (they share the same SDK, in fact), so you can trigger on a schedule or from a storage queue, etc., but you don't need an App Service to run your code in. There are some great intro videos in the Azure Functions documentation, and there is a comparison of the hosting options between WebJobs, Functions, Flow and Logic Apps.
You can edit the function directly in the portal (paste/type your C# or Node.js code straight in), or use source control to manage it.
If you really want to keep your app as an .exe and run it like that, then you will need to use Azure Scheduler instead, which is a basic job runner.
Decisions, decisions...!
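As a sketch of the Functions route suggested above (C# class library model; the function name, schedule, and import logic placement are assumptions), a timer-triggered function that runs every 5 minutes would look like:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class FileImportFunction
{
    // Six-field CRON expression: second minute hour day month day-of-week.
    // "0 */5 * * * *" fires at the top of every 5th minute.
    [FunctionName("FileImportFunction")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        // Look up the file location in the database, read the file from
        // Azure File Storage, and insert the rows into the target table here.
        log.LogInformation($"Import run at: {DateTime.UtcNow}");
    }
}
```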
Looking at https://azure.microsoft.com/en-gb/documentation/articles/scheduler-intro/ it seems that the only actions that are supported are:
HTTP, HTTPS,
a storage queue,
a service bus queue,
a service bus topic
so running a self-contained .exe or script doesn't look to be possible.
Do you agree?

Technique for handling ServiceBus Topic subscribers running in Azure staging when using ReceiveMode.ReceiveAndDelete

We have a number of topics in the Azure SB and constantly update our environments through a VIP swap from staging to production.
When an instance is running in staging we don't want the subscribers to read and delete messages intended to send events to our instances running in the production slot.
The solution I have come up with is to create subscriptions that include the RoleEnvironment.SubscriptionId in the name. These are then deleted during RoleEntryPoint.OnStop() to avoid unused subscriptions.
Is there a more elegant solution to this and am I missing something obvious?
One approach is to have a configuration setting that your application understands. It can then be changed between staging/production environments, and the same config value can be used to enable/disable things you do not want in production. For Service Bus, you can create a Staging and a Production namespace and then put the URL in config.
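The config-flag approach above can be sketched as follows, assuming the classic Cloud Services SDK era the question describes (CloudConfigurationManager and the Microsoft.ServiceBus.Messaging client); the setting names, topic, and subscription names are assumptions for illustration:

```csharp
using System;
using Microsoft.Azure;                 // CloudConfigurationManager
using Microsoft.ServiceBus.Messaging;  // SubscriptionClient, ReceiveMode

public static class SubscriberBootstrap
{
    public static void Start()
    {
        // A cscfg setting flipped during the VIP swap decides whether this
        // instance may attach a destructive (ReceiveAndDelete) receiver.
        var isProduction = string.Equals(
            CloudConfigurationManager.GetSetting("IsProductionSlot"),
            "true", StringComparison.OrdinalIgnoreCase);

        if (!isProduction)
        {
            return; // staging slot: never consume production messages
        }

        var client = SubscriptionClient.CreateFromConnectionString(
            CloudConfigurationManager.GetSetting("ServiceBus.ConnectionString"),
            "events-topic", "events-subscription",
            ReceiveMode.ReceiveAndDelete);
        client.OnMessage(message => { /* dispatch the event */ });
    }
}
```

Pointing the connection-string setting at a separate Staging namespace, as suggested, gives the same isolation without any subscription cleanup in OnStop().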
