Access Azure WebApp Object From Azure WebJob

We have a static class in the WebApp that contains a static dictionary of current sessions and usernames. We need access to the data in that dictionary from the WebJob, because we want to update data based on who currently has active sessions. The WebJob runs every 5 minutes and needs the current list of sessions/users.
I can access the dictionary from the WebJob, but it's always null. We have logging in the WebApp that verifies there are entries in the dictionary, but when the WebJob accesses the dictionary it is null.
How can I get that object in the WebJob and read its data? Do we need to use Azure Storage (Queue/Table) for this to work?

An "Azure AppService" is hosted on an "AppService Plan", which in turn consists of a number of virtual machines. WebJobs ("your.webjob.exe") and WebApps(usually "w3wp.exe") are completely independent processes on theses systems. They may run on the same machine, but there is no guarantee for it. Either way, communication between them would be difficult and can definitely not be achieved by using a common static variable.
For your use case, you should use a common storage. Azure Storage could work, but Azure Redis Cache or simple SQL might also do the trick. Depends on your framework and requirements.
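As an illustration, here is a minimal sketch that shares the session list through Azure Table Storage instead of a static dictionary. The table name "ActiveSessions", the app setting "StorageConnectionString", and the property names are all made up; error handling and table creation are omitted:

```csharp
using System;
using System.Collections.Generic;
using Azure.Data.Tables;

public static class SessionStore
{
    // Both the connection string setting and the table name are hypothetical.
    private static readonly TableClient Table = new TableClient(
        Environment.GetEnvironmentVariable("StorageConnectionString"),
        "ActiveSessions");

    // Called from the WebApp whenever a session starts or is refreshed.
    public static void Upsert(string sessionId, string userName) =>
        Table.UpsertEntity(new TableEntity("sessions", sessionId)
        {
            ["UserName"] = userName
        });

    // Called from the WebJob on its 5-minute schedule.
    public static IEnumerable<TableEntity> GetActiveSessions() =>
        Table.Query<TableEntity>(e => e.PartitionKey == "sessions");
}
```

Because both processes now read and write the same external table, it no longer matters which VM either of them runs on.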

Related

Uploading data to Azure App Service's persistent storage (%HOME%)

We have a Windows-based App Service that requires a large dataset to run (files stored on Azure Blob Storage, roughly 30 GB). This data is static per app version and therefore should be accessible to all instances across a given slot (a slot in our case represents a version).
Based on our initial research, Persistent Storage (%HOME%) seems like the ideal place for this, since data stored there is shared across instances but not across slots.
The next step is to load the required data as part of our DevOps deployment pipeline, since the App Service cannot operate without the underlying data. However, the %HOME% directory seems to be accessible only by the App Service itself, even though the underlying implementation uses Azure Storage.
At this point we're considering having the App Service download the data during startup, but then we hit a snag: we have two instances. We could implement a mutex (using a blob lease), but that seems too complicated a solution for a simple need.
Any thoughts about how to best implement this?
The problems I see with loading the file on container startup are the following:
It's going to be really slow, and you might hit one of the built-in App Service timeouts.
Every time your container restarts, or you add another instance, it will re-download all the data, and that can cause issues with blocked writes due to file handle locks, which can make files or directories on %HOME% completely inaccessible for reading and modifying (this just happened to me).
Instead, I would suggest connecting the app to Azure Files via SMB and, for example, having a directory per version. That way you can write the data to Azure Files during your build pipeline and save an environment variable or file that tells each slot which directory to read the current version's data from (see the sketch below).
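A minimal sketch of the lookup side, assuming the Azure Files share is mounted at the path stored in a DATA_MOUNT_PATH setting and the pipeline writes the active version into DATA_VERSION (both setting names are hypothetical):

```csharp
using System;
using System.IO;

public static class DataPath
{
    // Resolves a data file for the version this slot is pinned to.
    public static string Resolve(string relativeFile)
    {
        var mount = Environment.GetEnvironmentVariable("DATA_MOUNT_PATH"); // SMB mount point of the share
        var version = Environment.GetEnvironmentVariable("DATA_VERSION");  // e.g. "2.3.0", set per slot
        return Path.Combine(mount, version, relativeFile);
    }
}
```

Since the pipeline uploads the data before a slot's DATA_VERSION setting is flipped, the instances never need to download anything at startup.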

One Azure Function in one repo deployed in multiple Azure App Services

Can I deploy one Azure timer-triggered function from one repo to multiple App Services?
Example:
Currently I have a repo with one Azure Function in it (named Function1; it runs every few minutes).
I have 5 customers and a database for each customer, and therefore 5 connection strings. Each customer requires me to host the function in an isolated environment, independent from the other customers.
The function "Function1" performs the same logic for each of my customers. It just accesses a different database for each, using the corresponding connection string.
Therefore, I created 5 App Services (Function1-Customer1, Function1-Customer2, ...) to satisfy the independent-environment requirement.
Each App Service has its unique database connection string assigned in its App Settings.
I tried to deploy "Function1" to all 5 of these App Services. However, when viewing the Log Stream for any of the App Services, it seems that only one instance of the function is running, depending on which App Service was deployed last.
For example, if Function1-Customer1 was deployed last and I go to the Log Stream of Function1-Customer2 or Function1-Customer3, both output the connection string of Function1-Customer1. If Function1-Customer2 was deployed last, I would see its connection string in all the other App Services.
Is it possible to deploy Function1 to serve all 5 of these App Services, or do I need a different architecture here?
The functions coordinate by obtaining leases in the underlying blob storage. If two function apps end up fighting over the same lease, they will block each other even though they are supposed to do different things. You can explore this by looking at the blobs in the underlying storage account and checking their "lease" status.
Based on our discussion in the comments, I would recommend using a dedicated storage account for each function app. I would not recommend AzureFunctionsWebHost__hostid or similar solutions, since they add more complexity.
For each trigger, the Azure Functions runtime manages its own queue in Azure Queue Storage. You can use a single function app that triggers 5 different tasks, one per customer, or you can create a separate Azure storage account for each function app.
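To illustrate the intended setup, here is a rough sketch of a timer-triggered function that reads its connection string from App Settings, so the identical code can be deployed to all 5 App Services (the setting name "CustomerDbConnectionString" and the schedule are made up):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        // Each App Service supplies its own value in App Settings, so the
        // same deployment targets a different database per customer.
        var connectionString =
            Environment.GetEnvironmentVariable("CustomerDbConnectionString");

        log.LogInformation("Function1 running against this customer's database.");
        // ... query/update the customer's database here ...
    }
}
```

Combined with a dedicated AzureWebJobsStorage account per function app, the 5 apps no longer compete for the same timer lease, and each Log Stream shows its own connection string.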

Azure WebJob: monitor all containers in account

I am developing an Azure WebJob that monitors a blob storage account for newly inserted blobs. My storage account consists of multiple containers, all holding similar information. Currently I'm using a separate BlobTrigger for every container to monitor each of them individually.
Is there a way to monitor the whole account for new blobs instead of every single container? If not, can I automatically iterate over the containers in a storage account and call the WebJob with the container names as a parameter?
No, currently each BlobTrigger monitors a single container. At startup, the blob containers indicated by your BlobTrigger-annotated functions result in multiple "listeners" being started, monitoring the various containers. So there's no runtime way for you to iterate over containers and set this up yourself, short of codegen/ILGen of SDK methods with the appropriate attributes.
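For reference, the per-container pattern described above looks roughly like this with the WebJobs SDK (the container names are illustrative):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public static class BlobFunctions
{
    // Each BlobTrigger watches exactly one container, hence one function
    // per container.
    public static void OnContainerA(
        [BlobTrigger("container-a/{name}")] Stream blob, string name, TextWriter log)
    {
        log.WriteLine("New blob in container-a: " + name);
    }

    public static void OnContainerB(
        [BlobTrigger("container-b/{name}")] Stream blob, string name, TextWriter log)
    {
        log.WriteLine("New blob in container-b: " + name);
    }
}
```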
If you'd like, you can add a feature suggestion here: https://github.com/Azure/azure-webjobs-sdk/issues, and we can review it for the next release. However, I've never heard of anyone else needing this functionality, so it seems like a pretty rare corner case :)

Which pieces do or do not persist in an Azure Cloud Service Web Role?

My understanding of the VMs involved in Azure Cloud Services is that at least some parts of them are not meant to persist throughout the lifetime of the service (unlike regular VMs that you can create through Azure).
This is why you must use Startup Tasks in your ServiceDefinition.csdef file in order to configure certain things.
However, after playing around with it for a while, I can't figure out what does and does not persist.
For instance, I installed an ISAPI filter into IIS by logging in over Remote Desktop. That seems to have persisted across deployments and even a reimaging.
Is there a list somewhere of what does and does not persist and when that persistence will end (what triggers the clearing of it)?
See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for information about what is preserved on an Azure PaaS VM in different scenarios.
In short, the only things that will truly persist are the things packaged in your cscfg/cspkg (i.e. startup tasks). Anything else done at runtime or via RDP will eventually be removed.
See "How to: Update a cloud service role or deployment": in most cases, an update to an existing deployment will preserve local data while updating the application code for your cloud service.
Be aware that if you change the size of a role (that is, the size of the virtual machine that hosts a role instance) or the number of roles, each role instance (virtual machine) must be re-imaged, and any local data will be lost.
Likewise, if you use the standard deployment practice of creating a new deployment in the staging slot and then swapping the VIP, you will lose all local data (these are new VMs).

Web Role local storage URI

I have a local storage folder, called TempStore, set up on my Web Role instances.
Is it possible to expose files as a URI from my local storage?
E.g.:
http://myapplication.cloudapp.net/TempStore/helloworld.jpg
I understand that I could use blobs for this, but I would prefer to use local storage in this case.
There is. However, I really do not understand the reason for doing this; the only reason I can see is a misunderstanding of the capabilities of the Windows Azure platform services (Storage, Cloud Services / Web Roles).
You have to know that local storage is not synced between role instances. Also, if a hardware failure happens, the role-healing process will instantiate an entirely new VM with a fresh image from your cloud service package, which leaves you with a completely empty local storage resource. The Windows Azure Load Balancer (the thing that sits in front of your web and worker roles) uses a round-robin algorithm, meaning that even if a user uploads a file to your web role in one request, the next request (in which you will probably want to show a preview) might go to another instance that has no idea of the upload.
If, after knowing all these facts, you still want to "shoot yourself in the foot", here is the solution:
Implement a VirtualPathProvider (a rough sketch follows after this list)
Register it for the desired public URL path
Use the RoleEnvironment.GetLocalResource method in your VPP to obtain the full physical path to the local storage resource
Don't blame anyone else when you realize this was a mistake ;)
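Here is a rough, non-production sketch of such a provider, assuming the local storage resource is named "TempStore" and should be exposed under /TempStore/ (class and member names are made up):

```csharp
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;
using Microsoft.WindowsAzure.ServiceRuntime;

public class TempStorePathProvider : VirtualPathProvider
{
    private const string Prefix = "~/TempStore/";

    // Physical root of the "TempStore" local storage resource on this instance.
    private readonly string _root =
        RoleEnvironment.GetLocalResource("TempStore").RootPath;

    private static bool IsTempStorePath(string virtualPath) =>
        VirtualPathUtility.ToAppRelative(virtualPath)
            .StartsWith(Prefix, StringComparison.OrdinalIgnoreCase);

    private string MapPath(string virtualPath) =>
        Path.Combine(_root,
            VirtualPathUtility.ToAppRelative(virtualPath).Substring(Prefix.Length));

    public override bool FileExists(string virtualPath) =>
        IsTempStorePath(virtualPath)
            ? File.Exists(MapPath(virtualPath))
            : base.FileExists(virtualPath);

    public override VirtualFile GetFile(string virtualPath) =>
        IsTempStorePath(virtualPath)
            ? new TempStoreFile(virtualPath, MapPath(virtualPath))
            : base.GetFile(virtualPath);

    private sealed class TempStoreFile : VirtualFile
    {
        private readonly string _physicalPath;

        public TempStoreFile(string virtualPath, string physicalPath)
            : base(virtualPath) => _physicalPath = physicalPath;

        public override Stream Open() => File.OpenRead(_physicalPath);
    }
}
```

Register it once at application startup, for example in Application_Start: HostingEnvironment.RegisterVirtualPathProvider(new TempStorePathProvider());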
