Python Azure Timer Function with Durable Entity

I'm currently working on my first Azure Function, which acts as an intermediary between Microsoft Defender for Endpoint and Azure Sentinel. It runs every 5 minutes and collects data matching specific filters from the Defender API, then forwards it as custom logs to Azure Sentinel. Due to the authentication measures in place on Defender, I've set my script up using ADAL to do a device code logon the first time, then use the refresh tokens for its scheduled runs.
This is where I've come across the problem: since Azure Functions are serverless by design, holding this refresh token somewhere for the next invocation has proven troublesome. I'm trying to use Durable Functions, but the documentation for such a use case seems non-existent.
Are there other appropriate methods to store a single variable across invocations of an Azure Function?

There is more than one way to solve the problem you are facing with holding the refresh token across invocations of your Azure Function.
One way is to use your timer-triggered Azure Function to request new access tokens and Azure Key Vault to store those tokens securely. Each run reads the current refresh token from Key Vault, exchanges it for fresh tokens, and writes the rotated refresh token back, so the next invocation always picks up the latest value.
Check this document on how to access secrets from Key Vault.
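As a minimal sketch of that pattern (assuming the azure-identity and azure-keyvault-secrets packages; the KEY_VAULT_URL app setting, the secret name, and the redeem_refresh_token helper are placeholders for illustration, not part of any SDK):

import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def refresh_defender_tokens():
    # Authenticate to Key Vault with the function app's managed identity.
    vault = SecretClient(vault_url=os.environ["KEY_VAULT_URL"],
                         credential=DefaultAzureCredential())

    # Read the refresh token persisted by the previous invocation.
    refresh_token = vault.get_secret("defender-refresh-token").value

    # Exchange it for new tokens; redeem_refresh_token stands in for
    # your existing ADAL device-code/refresh logic (hypothetical helper).
    access_token, new_refresh_token = redeem_refresh_token(refresh_token)

    # Persist the rotated refresh token for the next invocation.
    vault.set_secret("defender-refresh-token", new_refresh_token)
    return access_token

The function app's managed identity needs get and set permissions on secrets in the vault for this to work.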
Another way is to enable the token store for your function app and have it persist tokens in blob storage. Check this document for more information.
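If you would rather manage the storage yourself than rely on the built-in token store, a single blob also works. A minimal sketch, assuming the azure-storage-blob package and the function app's existing storage account (the container and blob names are placeholders):

import os
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    os.environ["AzureWebJobsStorage"],    # the function app's storage account
    container_name="function-state",      # placeholder container
    blob_name="refresh-token")            # placeholder blob

# Read the token saved by the previous invocation
# (the blob must be seeded once before the first run).
refresh_token = blob.download_blob().readall().decode()

# ... redeem refresh_token with your auth flow, then persist the rotated one ...
new_refresh_token = refresh_token  # placeholder for the rotated token
blob.upload_blob(new_refresh_token, overwrite=True)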

Related

.NET Core Dependency Injection and services that utilize frequently rotated authorization keys

Issue Summary
I have multiple ASP.NET Core applications that connect to Azure resources such as CosmosDB, Azure Storage Queues, Azure Event Hubs, etc. All of these resources can utilize Shared Access Signature (SAS) tokens for authentication. These tokens expire, which presents a problem when my application initializes the service once upon startup via services.AddSingleton<T>() (or a similar option).
For example, what I typically do is read the SAS token from a file upon startup (likely mounted to my pod as a volume in Kubernetes but I am not sure that's terribly relevant). That SAS token is then provided to an Azure Storage Queue Client constructor, like this:
string sharedAccessSignature = File.ReadAllText(pathToSasToken);
services.AddSingleton<Azure.Storage.Queues.QueueClient>((sp) =>
{
    return new Azure.Storage.Queues.QueueClient(queueUri,
        new AzureSasCredential(sharedAccessSignature),
        new Azure.Storage.Queues.QueueClientOptions()
        {
            MessageEncoding = Azure.Storage.Queues.QueueMessageEncoding.Base64
        });
});
Unfortunately, I think this means once my SAS token expires, my QueueClient will no longer be able to connect to my Azure Storage Queue without restarting my whole application. Somehow, I need to re-read an updated SAS token from my file while I remain running. (I have another process running in my cluster that provides SAS tokens to my pods).
Possible Solutions
I figure the IOptionsMonitor approach could be useful, but unfortunately the SDKs for these clients don't accept an IOptionsMonitor<T> in their constructors, so they don't seem capable of re-reading new tokens at runtime -- at least not via IOptionsMonitor.
Another approach could be to use Transient or Scoped service lifetimes, but that requires using the same service lifetimes throughout my whole dependency chain... So if I have a singleton like a HostedService running, I cannot resolve a Transient or Scoped service from it without unpredictable results (AFAIK). (Update 12/31/2021: this is actually not true. Microsoft provides guidance on how to consume a scoped service in a HostedService, which is a good example of how one can use Scoped services and manage the lifetimes on your own.)
I could also just manually re-create my clients as my code runs, but that seems to defeat the purpose of using the .NET service provider and DI pattern.
Am I missing an obvious solution to this that I'm just not seeing in Microsoft's documentation?
I think you're missing Managed Identities. Rather than relying on SAS tokens, you assign a Managed Identity to your ASP.NET app and grant that identity access to the required services (see the sketch after the links below).
Benefits:
no need to redeploy / acquire a new SAS token when it changes or expires
external users won't be able to impersonate this identity (if someone gets access to a SAS token, they can use it outside the scope of your app)
More info:
https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-managed-identity
https://learn.microsoft.com/en-us/azure/cosmos-db/managed-identity-based-authentication
https://learn.microsoft.com/en-us/azure/stream-analytics/event-hubs-managed-identity
https://cmatskas.com/setting-up-managed-identities-for-asp-net-core-web-app-running-on-azure-app-service/
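As a rough sketch of the pattern (shown here in Python with the azure-identity and azure-storage-queue packages; the equivalent DefaultAzureCredential type exists in the .NET Azure.Identity library, and the account URL and queue name are placeholders):

from azure.identity import DefaultAzureCredential
from azure.storage.queue import QueueClient

# No SAS token anywhere: the credential resolves the managed identity
# at runtime and refreshes its own tokens automatically.
queue = QueueClient(
    account_url="https://<account>.queue.core.windows.net",  # placeholder
    queue_name="my-queue",                                   # placeholder
    credential=DefaultAzureCredential())

queue.send_message("hello")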

How to handle a Custom Connector that uses header authentication in each API call?

I have an Azure Logic App that uses a Custom Connector I made by importing a Postman Collection. The RESTful API that my connector calls requires 3 authentication headers in each request: UserName, Secret, ApiIntegrationCode. Because it takes three specifically named parameters, I don't believe any of the authentication presets will work for me.
I know that I can protect the inputs and outputs of various connectors. I have been entertaining the idea of storing the sensitive information in a SQL table that I query in each run and storing the values in variables that I pass to each of my custom connector's API calls.
Would this be a viable way of protecting sensitive data from being seen by people that may have access to my logic app? What is the most secure way I can pass these headers in each call?
There are not too many options within a (consumption) Logic App in this regard.
Your options with Logic Apps
A first step into the right direction is to put your sensitive information into an Azure Key Vault and use the corresponding connector in your Logic App to retrieve the data from there. This is easier to implement and more secure than querying a SQL table for this purpose.
The second thing you can do is to activate secure inputs for the connectors that make the API calls. This ensures that the sensitive information passed to these connectors is obfuscated in the run history of your Logic App and in connected services like Azure Log Analytics.
The problem with this approach is that anyone who has more than just read permissions on your Logic App can simply deactivate the secure inputs setting or create a step that dumps the content of your Key Vault. You can use RBAC to control access to your Logic App, but that of course adds administrative overhead.
Alternative: API Management Service
If you absolutely need to allow other developers to change the Logic App without exposing API secrets to them, you might consider using some sort of middle tier to communicate with the API. Azure API Management (APIM) is one of the options here.
You would manage your sensitive information in a Key Vault and inject it via "Named Values" into your APIM instance. You can then add your API as a backend in APIM and expose it towards your Logic App.
The advantage here is that you can secure access to your API with APIM subscription keys that you can cycle frequently. You can also restrict access to the original API to only those calls that need to be available to the Logic App.
Whether APIM is something for you depends on your use case, as it comes at a price; even the developer tier costs about $50/month: https://azure.microsoft.com/en-us/pricing/details/api-management/
Alternative: Azure Function
You can use a simple Azure Function that serves as a middle tier between your Logic App and your API. This function can be configured to pull the sensitive data from a Key Vault and can also be secured via a function access key that you can renew on a regular basis.
This is a dirt-cheap option if you are running the function on a consumption plan: https://azure.microsoft.com/en-us/pricing/details/functions/
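For illustration, a minimal sketch of such a function in Python (using the v2 programming model with the azure-identity and azure-keyvault-secrets packages; the app setting names, secret names, and route are assumptions for this example):

import os
import urllib.request
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

app = func.FunctionApp()

@app.route(route="proxy", auth_level=func.AuthLevel.FUNCTION)
def proxy(req: func.HttpRequest) -> func.HttpResponse:
    # Pull the three secrets from Key Vault at call time.
    vault = SecretClient(vault_url=os.environ["KEY_VAULT_URL"],
                         credential=DefaultAzureCredential())
    headers = {
        "UserName": vault.get_secret("api-username").value,
        "Secret": vault.get_secret("api-secret").value,
        "ApiIntegrationCode": vault.get_secret("api-integration-code").value,
    }

    # Forward the call to the real API with the injected headers.
    backend = urllib.request.Request(os.environ["BACKEND_API_URL"],
                                     headers=headers)
    with urllib.request.urlopen(backend) as resp:
        return func.HttpResponse(resp.read(), status_code=resp.status)

The Logic App then calls this endpoint with the function key, and the three real headers never appear in the Logic App at all.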

How do I set a secret in an Azure Logic App

Is there a way to set a secret's value and expiration date from within a Logic App?
I don't see the option listed (I scrolled through the list to ensure that I wasn't missing it).
It is as you suspected: you cannot update a secret using the out-of-the-box Key Vault connector available with Logic Apps. You can, however, do it using the Key Vault REST API: Update Secret - REST reference.
I personally write an Azure Function to perform various Key Vault operations, as I find some operations are lacking in the out-of-the-box connector in Logic Apps.
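As a sketch of what such a function could do to set both a value and an expiry (assuming the azure-identity package; note this uses the Set Secret operation, which creates a new secret version, and the vault URL and secret name are placeholders):

import json
import time
import urllib.request
from azure.identity import DefaultAzureCredential

vault_url = "https://<your-vault>.vault.azure.net"   # placeholder
secret_name = "my-secret"                            # placeholder

# Get a bearer token for the Key Vault data plane.
token = DefaultAzureCredential().get_token("https://vault.azure.net/.default").token

# Set Secret (PUT) creates a new version with a value and an expiry
# ("exp" is seconds since the Unix epoch; here roughly 90 days out).
body = json.dumps({
    "value": "new-secret-value",
    "attributes": {"exp": int(time.time()) + 90 * 24 * 3600},
}).encode()

req = urllib.request.Request(
    f"{vault_url}/secrets/{secret_name}?api-version=7.4",
    data=body, method="PUT",
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))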

Store variable in azure logic app to use in next run

I'm fetching data from an API, but I only want to fetch data from the last time the Logic App ran up to the current time (to reduce redundancy). The API supports passing a time window, but where in an Azure Logic App can I store the last run's date-time so I can pass it to the API?
(screenshot of the current Logic App design in the Logic App Designer)
Try this, it was helpful to me:
azure-logic-apps-storing-variables-between-runs
For now, Logic Apps don't support variables shared across different runs.
So if you want to implement this feature, you could use an Azure Storage queue or Service Bus: create a message to store the value you want to use in the next run. After reading the value, remember to delete the original message, then put a new message with the updated value back on the queue or Service Bus, as sketched below.
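For illustration, here is that read-delete-write round trip in Python with the azure-storage-queue package (the connection-string setting and queue name are placeholders); in a Logic App you would use the equivalent get message, delete message, and put message actions of the queue connector:

import os
from datetime import datetime, timezone
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"],  # placeholder setting
    "last-run-state")                         # placeholder queue name

# Read the value stored by the previous run, then delete the message.
last_run_time = None
for msg in queue.receive_messages(max_messages=1):
    last_run_time = msg.content
    queue.delete_message(msg)

# ... call the API with last_run_time ...

# Store the current timestamp for the next run.
queue.send_message(datetime.now(timezone.utc).isoformat())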
Hope this helps; if you still have any other problems, please feel free to let me know.

Call Azure function from azure cloud service without API key

Good day,
we created an Azure Function which fetches a secret from a Key Vault. The idea is not to have the secret in the code, as not every developer should be able to authenticate with the application.
However, now I have to include the Azure Function's API key in code, which (to me) seems like adding just another layer to the problem without actually preventing anyone from accessing the secret (a developer would just need to call the function).
I wonder: is there any way to call an Azure Function from a cloud service without an API key? I'd argue this should be possible, as both live in Azure itself. I already tried exactly this: calling the function from the cloud service without a key, but it just returns Forbidden.
