Azure Functions ActivityTrigger and HttpTrigger lock - multithreading

Can I safely use lock{} inside Azure ActivityTrigger and HttpTrigger functions, or do I have to work around it?

While I don't see why it shouldn't work, note that this would work only in the context of a single instance and won't hold when your app scales out.
Across function app instances, you will have to go for a distributed lock, for example using storage blob leases.
Also, it's better to leverage a queue (like Service Bus) with appropriate concurrency configuration to avoid hitting timeouts.

Related

What is the difference between SingletonScope.Function & Host in Azure WebJobs

What is the difference between SingletonScope.Function & SingletonScope.Host in Azure WebJobs?
I have a timer-triggered function hosted in two regions in AKS. I want to ensure that only one instance runs at a time.
All my instances share the same storage account. Can I use one of the above settings to configure my scenario?
Difference between WebJob host and Azure function:
The host is a runtime container for functions. The Host listens for triggers and calls functions.
In version 3.x, the host is an implementation of IHost.
In version 2.x, you use the JobHost object.
You create a host instance in your code and write code to customize its behavior.
This is a key difference between using the WebJobs SDK directly and using it indirectly through Azure Functions.
In Azure Functions, the service controls the host, and you can't customize the host by writing code.
Azure Functions lets you customize host behavior through settings in the host.json file.
For more information, see Compare the WebJobs SDK and Azure Functions
What is Singleton?
The Singleton attribute ensures that only one instance of a function runs, even when there are multiple instances of the host web app. The Singleton attribute uses distributed locking to ensure that one instance runs.
As in below example, only a single instance of the ProcessImage function runs at any given time:
[Singleton]
public static async Task ProcessImage([BlobTrigger("images")] Stream image)
{
    // Process the image.
}
To learn more about how SingletonMode.Function works, see: https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/SingletonMode.cs
Further, you can specify a scope expression/value on a singleton. The expression/value ensures that all executions of the function at a specific scope are serialized.
Check this link for example: https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#scope-values

Azure Container Instances vs Azure Functions

When would I prefer Azure Functions to Azure Container Instances, considering they both offer the possibility to perform run-once tasks and they bill on consumption?
Also, reading this Microsoft Learn Module:
Serverless compute can be thought of as a function as a service (FaaS), or a microservice that is hosted on a cloud platform.
Azure Functions is a platform that allows you to run plain code (instead of containers). The strength of Azure Functions is the rich set of bindings (input and output bindings) it supports. If you want to execute a piece of code when something happens (e.g. a blob is added to a storage account, a timer fires, ...), then I would definitely go with Azure Functions.
If you want to run a container-based workload for a short period of time and you don't have an orchestrator (like Azure Kubernetes Service) in place, Azure Container Instances makes sense.
Take a look at this from the Microsoft docs:
Source: https://learn.microsoft.com/en-us/dotnet/architecture/modernize-with-azure-containers/modernize-existing-apps-to-cloud-optimized/choosing-azure-compute-options-for-container-based-applications
If you would like to simplify the application development model, with an architecture of microservices granular enough that each piece of functionality typically reduces to a single function, then Azure Functions can be considered.
If the solution extends an existing Azure application with event-trigger-based use cases, Azure Functions can be the better choice. Here, the specific code (function) is invoked only for a specific event or trigger, and function instances are created and destroyed on demand (compute on demand: function as a service, FaaS).
Event-driven architecture is most often seen in IoT, where you typically define a specific trigger that causes an Azure Function to execute, so Azure Functions have a place in the IoT ecosystem as well.
If the solution has fast bursting and scaling requirements, Container Instances can be used, whereas for predictable scaling, VMs can be used.
Azure Functions avoids allocating extra resources (VMs), and cost accrues only while a function is processing work. You need not take care of infrastructure such as where the code executes, server configuration, memory, etc. For ACI, billing is per second, accounted by the time the container runs (CaaS: Containers as a Service).
ACI lets you quickly spawn a container to perform an operation and delete it when done, so the cost is only for the few hours of usage rather than for a dedicated VM, which would be far more expensive. ACI also lets you run a container without depending on an orchestrator like Kubernetes, in scenarios where you don't need orchestration features such as service discovery, mesh, and coordination.
The key difference is that with Azure Functions the function is the unit of work, whereas with a container instance the entire container is. Azure Functions start and stop based on event triggers, whereas the microservices in containers run the entire time.
Processing/execution time also plays a critical role: if an event handler needs 10 minutes or more to execute, it is better hosted elsewhere (for example in a container or VM), because on the Consumption plan the maximum configurable function timeout is 10 minutes (Premium and Dedicated plans allow longer or unlimited timeouts).
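For reference, the function timeout is controlled by the `functionTimeout` setting in host.json; the value below assumes the Consumption plan, where it can be raised from the 5-minute default to the 10-minute maximum:

```json
{
  "functionTimeout": "00:10:00"
}
```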
There are solutions that use both, where an Azure Function is triggered for minimal processing/decision making and in turn invokes a container instance for specific burst or complete processing.
ACI together with AKS also forms a powerful deployment model for microservices: AKS handles the typical deployment of microservices while ACI handles burst workloads, reducing the challenges of managing scaling and making effective use of the per-second billing model.

Azure Function TimeTrigger in multiple regions

I want to run a fully redundant failover environment in Azure. I have my web apps covered with Azure Traffic Manager. I can run my storage queues in two different accounts. All is covered. The only thing I can't figure out is how time triggers should be handled. I can't have them fire in both regions. Does anyone have an idea or solution I could try?
Timer triggers will run only one instance at a time, even if you have configured scale-out to multiple instances. The timer trigger uses a storage lock to ensure that there is only one timer instance when a function app scales out to multiple instances. If two function apps share the same identifying configuration and each uses a timer trigger, only one timer runs.

Azure EventHubs EventProcessorHost tries to access Azure Storage queues

After enabling Application Insights on a WebJob that listens for events on an Event Hub using the EventProcessorHost class, we see that it continuously tries to access a set of non-existent queues in the configured storage account. We have not configured any queues on this account.
There's no reference to a queue anywhere in my code, and it is my understanding that the EventProcessorHost uses blob storage, not queues, to maintain state. So: why is it trying to access queues?
The queue access that you're seeing comes from the JobHost itself, not from any specific trigger type like EventHubs. The WebJobs SDK uses some storage resources itself behind the scenes for its own operation, e.g. control queues to track its own work, blobs for storage of log information shown in the Dashboard, etc.
In the specific case you mention above, those control queues that are being accessed are part of our Dashboard Invoke/Replay/Abort support. We have an open issue here in our repo tracking potential improvements we can make in this area. Please feel free to chime in on that issue.

Is it possible to create a public queue in Windows Azure?

In Windows Azure it's possible to create public Blob Container. Such a container can be accessed by anonymous clients via the REST API.
Is it also possible to create a publicly accessible Queue?
The documentation for the Create Container operation explains how to specify the level of public access (with the x-ms-blob-public-access HTTP header) for a Blob Container. However, the documentation for the Create Queue operation doesn't list a similar option, leading me to believe that this isn't possible - but I'd really like to be corrected :)
At this time, Azure Queues cannot be made public.
As you have noted, this "privacy" is enforced by requiring all Storage API calls relating to queues to be authenticated with a signed request using your key. There is no "public" concept similar to public containers in blob storage.
This would follow best practice in that even in the cloud you would not want to expose the internals of your infrastructure to the outside world. If you wanted to achieve this functionality, you could expose a very thin/simple "layer" app on top of queues. A simple WCF REST app in a web role could expose the queuing operations to your consumers, but handle the signing of api requests internally so you would not need the queues to be public.
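A toy sketch of that "thin layer" idea in pure Python (all class and method names here are invented; a real implementation would be an HTTP endpoint in front of the Storage REST API): the facade holds the account key and signs each request internally, so anonymous callers never need the key.

```python
import hashlib
import hmac

class PrivateQueue:
    """Stand-in for a storage queue that requires signed requests."""
    def __init__(self, key: bytes):
        self._key = key
        self._messages = []

    def put(self, body: str, signature: str):
        # Reject any request not signed with the account key.
        expected = hmac.new(self._key, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("request not signed with the account key")
        self._messages.append(body)

class PublicFacade:
    """The thin public layer: signs requests internally with the key."""
    def __init__(self, queue: PrivateQueue, key: bytes):
        self._queue, self._key = queue, key

    def enqueue(self, body: str):
        sig = hmac.new(self._key, body.encode(), hashlib.sha256).hexdigest()
        self._queue.put(body, sig)

key = b"account-key"
queue = PrivateQueue(key)
facade = PublicFacade(queue, key)
facade.enqueue("hello")  # anonymous caller: no key needed
print(queue._messages)   # ['hello']
```

The facade is also the natural place to add throttling or validation, which is exactly the kind of control you lose if the queue itself were public.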
You are right, Azure Storage queues can't be made publicly accessible the way blobs (URIs) can. However, you may still be able to achieve a publicly consumable messaging infrastructure with the AppFabric Service Bus.
I think the best option would be to set up a worker role and provide access to the queue publicly in that manner, maybe with the AppFabric Service Bus for extra connectivity/interactivity with external sources.
Otherwise, it's not really clear what the scope might be. The queue itself appears to be locked away at this time. :(
