I'm trying to apply the least privilege principle to an Azure Function. What I want is to make a FunctionApp have only read access to a, for example, storage queue. What I've tried so far is:
Enable managed identity in the FunctionApp
Create a role that only allows read access to the queues (role definition below)
Go to the storage queue IAM permissions, and add a new role assignment, using the new role and the Function App.
But it didn't work. If I try to write to that queue from my function (using an output binding), the item is written, where I expected a failure. I've tried using the built-in role "Storage Queue Data Reader (Preview)" with the same result.
What's the right way to add/remove permissions of a Function App?
Role definition:
{
  "Name": "Reader WorkingSA TestQueue Queue",
  "IsCustom": true,
  "Description": "Read TestQueue queue on WorkingSA storage account.",
  "Actions": ["Microsoft.Storage/storageAccounts/queueServices/queues/read"],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/queueServices/queues/messages/read"
  ],
  "NotActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourceGroups/TestAuth-dev-rg"
  ]
}
@anirudhgarg has pointed the right way.
The managed identity and RBAC settings you configured only make a difference when your Function app uses a managed identity access token to reach the Storage service. They have no effect on function bindings, which internally connect to Storage using a connection string. If you haven't set the connection property on the output binding, it uses the AzureWebJobsStorage app setting by default.
To be more specific, a connection string has nothing to do with the Azure Active Directory authentication process, so it can't be influenced by AAD configuration. Hence, if a function takes advantage of a Storage account connection string (e.g. uses a Storage-related binding), we can't limit its access with those settings. Likewise, no connection string means no access.
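To make this concrete, here is a minimal sketch (function, queue, and setting names are hypothetical): whatever connection string the binding's Connection property resolves to determines the access, and the identity's RBAC roles are never consulted.
using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class EnqueueFunction
{
    // "MyStorageConnection" is a hypothetical app setting holding a plain
    // storage connection string; the binding uses it directly, so any RBAC
    // role assigned to the managed identity has no effect here.
    [FunctionName("EnqueueItem")]
    public static void Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("testqueue", Connection = "MyStorageConnection")] out string outputMessage,
        ILogger log)
    {
        outputMessage = new StreamReader(req.Body).ReadToEnd();
        log.LogInformation("Item written with the connection string's full rights.");
    }
}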
Update for using SAS token
If the queue mentioned is used in a queue trigger or input binding, we can restrict the function to read-and-process access (get a message, then delete it) by using a SAS token.
Prerequisite:
The queue must live in a storage account other than the one specified by the AzureWebJobsStorage app setting; AzureWebJobsStorage requires a connection string offering full access via the account key.
The Function app must be on runtime 2.0. Check this under Function app settings > Runtime version: 2.xx (~2). In 1.x, the trigger connection requires broader permissions, like AzureWebJobsStorage does.
Then generate a SAS token in the portal and put the resulting connection string in the app settings, as sketched below.
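For example, a sketch under assumed names (the SasQueueConnection setting and the SAS values are hypothetical): the app setting holds a SAS-based connection string without the account key, and the trigger points at it via Connection.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessQueueFunction
{
    // Hypothetical app setting built from the SAS token copied from the portal;
    // it contains no account key, only the queue endpoint plus the SAS token:
    //   "SasQueueConnection":
    //     "QueueEndpoint=https://workingsa.queue.core.windows.net;SharedAccessSignature=sv=...&ss=q&srt=o&sp=rp&sig=..."
    [FunctionName("ProcessQueueItem")]
    public static void Run(
        [QueueTrigger("testqueue", Connection = "SasQueueConnection")] string message,
        ILogger log)
    {
        log.LogInformation($"Processed: {message}");
    }
}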
Related
I have a question regarding permissions for ADLS Gen2
short description:
I have a Gen2 storage account and created a container.
Folder Structure looks something like this
StorageAccount1
--->Container1
--->Folder1
--->Files 1....n
Also, I have a service principal from a customer.
Now I have to give the customer write-only permission on only Folder1 (they should not be able to delete files inside Folder1).
I have assigned the service principal the permissions below in the access control list:
Container1 --> Execute
Folder1 --> Write, Execute
With this, the customer can now put data into Folder1, but how do I prevent them from deleting any files inside it? (I don't want to use SAS.)
Or is there any way other than ACLs?
Please help :)
ACL for ADLSgen2
Please check if the below works for you.
ACLs are the way to control access at the file and folder level, unlike Azure RBAC roles, which apply at the storage account/container level.
Best practice is to always restrict access using Azure RBAC at the storage account/container level, where there are several Azure built-in roles that you can assign to users, groups, service principals, and managed identities, and then to combine that with ACLs for more restrictive control at the file and folder level.
For example, Storage Blob Data Contributor has read/write/delete permissions. See Assign an Azure role.
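If you prefer to script the ACL assignment described in the question, here is a minimal sketch using the Azure.Storage.Files.DataLake SDK (account, container, and folder names are taken from the question; the service principal object ID is a placeholder):
using System;
using System.Linq;
using Azure.Identity;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

// Minimal sketch, assuming the Azure.Storage.Files.DataLake and Azure.Identity
// packages and a caller with permission to change ACLs.
var directory = new DataLakeServiceClient(
        new Uri("https://storageaccount1.dfs.core.windows.net"),
        new DefaultAzureCredential())
    .GetFileSystemClient("container1")
    .GetDirectoryClient("Folder1");

// Read the current ACL, append Write + Execute for the service principal,
// and write the list back (SetAccessControlList replaces the whole ACL).
var entries = directory.GetAccessControl().Value.AccessControlList.ToList();
entries.Add(new PathAccessControlItem(
    AccessControlType.User,
    RolePermissions.Write | RolePermissions.Execute,
    entityId: "<service-principal-object-id>")); // hypothetical object ID
directory.SetAccessControlList(entries);
Note that POSIX-style ACLs have no separate delete bit: deleting a file is governed by write + execute on its parent folder (see the reference on recursive deletes below), which is why write access granted through ACLs alone cannot block deletes, and a custom RBAC role is considered instead.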
If the built-in roles don't meet the specific needs of your
organization, you can create your own Azure custom roles.
Reference
To assign roles, you must have a role that includes role-assignment write permission, such as Owner or User Access Administrator, at the scope at which you are trying to assign the role.
To create a custom role with customized permissions:
Create a new file C:\CustomRoles\customrole1.json like the example below. The ID should be set to null on initial role creation, as a new ID is generated automatically.
{
  "Name": "Restrict user from delete operation on Storage",
  "ID": null,
  "IsCustom": true,
  "Description": "This role will restrict the user from delete operations on the storage account. However, the user will still be able to see the storage account, container, and blobs.",
  "Actions": [
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
    "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
  ],
  "NotActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/delete"
  ],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
  ],
  "NotDataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete"
  ],
  "AssignableScopes": [
    "/subscriptions/dxxxx7-xxxx"
  ]
}
Using the above role definition, run the PowerShell command below to create the custom role:
New-AzRoleDefinition -InputFile "C:\CustomRoles\customrole1.json"
See the references below for details.
How to restrict the user from upload/download or delete blob in
storage account - Microsoft Tech Community
Azure built-in roles - Azure RBAC | Microsoft Docs
Also consider enabling soft delete, so that deletions can be restored if a role does include delete permissions.
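For instance, a hedged sketch of enabling blob soft delete with the Azure.Storage.Blobs SDK (the account name is assumed; the same setting is available in the portal under Data protection):
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Minimal sketch: turn on blob soft delete with a 7-day retention window
// so accidental deletions can be undone within that period.
var service = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    new DefaultAzureCredential());

BlobServiceProperties props = service.GetProperties().Value;
props.DeleteRetentionPolicy = new BlobRetentionPolicy { Enabled = true, Days = 7 };
service.SetProperties(props);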
Though you mentioned not wanting to use it, just in case: a shared access signature (SAS) can be used to restrict access to a blob container or an individual blob. A folder in blob storage is virtual, not a real folder. You may refer to the suggestion mentioned in this article.
References
What permissions are required to recursively delete a directory and its contents
Cannot delete blobs from ADLS gen2 when connected via Private Endpoint - Microsoft Q&A
Authorize with Azure Active Directory (REST API) - Azure Storage | Microsoft Docs
The requirement is to create an access package with a few roles so that users can perform the activities below:
Read & write access to data stored in a given blob container (the 'abc' blob container).
A role to access Azure Data Factory to build pipelines and to process & load the data to a staging area (a blob container or SQL Server).
A role with DDL & DML and execute permissions to access the data/databases in the SQL Server environment.
I was referring to Azure RBAC and built-in roles but was unable to get a clear idea considering the above points.
My question is: are there any built-in roles, or do I need to create a custom role? And how do I create a custom role (for the above requirements) considering baseline security?
Also, is there any reference of available actions I can consult to write custom JSON role definitions?
My question is: are RBAC roles possible for SQL Server in a VM? If yes, how?
Additionally, if I have both a PaaS instance of SQL Server and a VM instance of SQL Server (that is, SQL Server in a VM), how will the RBAC roles be managed for both?
According to your requirements, please go through the workarounds below and see if they are helpful:
Read & write access to data stored in a given blob container (‘abc'
blob container).
You can make use of a built-in role like Storage Blob Data Contributor, which allows operations like reading, writing, and deleting Azure Storage containers and blobs. If you want to know more in detail, go through this reference.
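As a quick illustration (account and container names are assumed), once Storage Blob Data Contributor is assigned, data-plane calls can authenticate with Azure AD instead of an account key; a minimal sketch:
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// Minimal sketch: with Storage Blob Data Contributor assigned to the
// signed-in identity, read and write work without any account key.
var container = new BlobContainerClient(
    new Uri("https://mystorageaccount.blob.core.windows.net/abc"),
    new DefaultAzureCredential());

container.UploadBlob("sample.txt", BinaryData.FromString("hello")); // write
var download = container.GetBlobClient("sample.txt").DownloadContent(); // read
Console.WriteLine(download.Value.Content.ToString());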
Role to access azure data factory to build pipeline, process & load
the data to a staging area (to Blob container or SQL server).
You can make use of a built-in role like Data Factory Contributor, which allows you to create and manage data factories as well as child resources within them. Those child resources include pipelines, datasets, linked services, and so on. With this role, you can build pipelines and process and load the data. If you want to know more in detail, go through this reference.
DDL & DML and execute permission role to access the data/database in
SQL server environment.
You can make use of a built-in role like SQL Server Contributor, which lets you manage SQL servers and databases. Note that this is a management-plane role; DDL & DML and execute permissions inside a database are granted within SQL Server itself (for example via database roles), not through Azure RBAC. If you want to know more in detail, go through this reference.
If you want to create a custom role for all of these, make sure you have the Owner or User Access Administrator role on the subscription. You can create a custom role in 3 ways:
Clone a role – You can start from an existing role and modify its permissions by adding and deleting them according to your needs.
Start from scratch – Here you add all the permissions you need manually by picking them from their resource providers and excluding the permissions you don't need.
Start from JSON – Here you just upload a JSON file that you create separately, putting all needed permissions in the Actions array and excluded permissions in the NotActions array. If the permissions operate on data, add them to DataActions and NotDataActions instead. In AssignableScopes, you include the scopes where the role should be available, i.e. the subscription or resource group, as needed.
Considering baseline security, it is always suggested to grant read permissions only. But as you need write permission for the blob container and for building pipelines, add only those (read/write) to the Actions section and everything else (delete) to the NotActions section.
If you want to add additional actions, simply include those permissions in the Actions section of the JSON file, and make sure to grant read permission on the resource groups.
A sample custom role JSON file for your reference:
{
  "assignableScopes": [
    "/"
  ],
  "description": "Combining all 3 requirements",
  "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/***************************",
  "name": "**********************",
  "permissions": [
    {
      "actions": [
        "Microsoft.Authorization/*/read",
        "Microsoft.Resources/subscriptions/resourceGroups/read",
        "Microsoft.ResourceHealth/availabilityStatuses/read",
        "Microsoft.Resources/deployments/*",
        "Microsoft.Storage/storageAccounts/blobServices/containers/read",
        "Microsoft.Storage/storageAccounts/blobServices/containers/write",
        "Microsoft.DataFactory/dataFactories/*",
        "Microsoft.DataFactory/factories/*",
        "Microsoft.Sql/locations/*/read",
        "Microsoft.Sql/servers/*"
      ],
      "notActions": [
        "Microsoft.Storage/storageAccounts/blobServices/containers/delete",
        "Microsoft.Sql/servers/azureADOnlyAuthentications/delete",
        "Microsoft.Sql/servers/azureADOnlyAuthentications/write"
      ],
      "dataActions": [
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action",
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
      ],
      "notDataActions": [
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete"
      ]
    }
  ],
  "roleName": "Custom Role Contributor",
  "roleType": "CustomRole",
  "type": "Microsoft.Authorization/roleDefinitions"
}
Reference:
Azure custom roles - Azure RBAC | Microsoft Docs
When I develop for Azure, I usually start by copying in some Key Vault client code so that only Key Vault URLs end up in my settings file and no secrets can ever end up in my git repositories.
After starting to write Azure Functions, I realized that this was not possible for the trigger connection string for e.g. Service Bus or blob storage.
The recommended approach seems to be to connect the app to Key Vault directly in Azure when deployed, and to just manage secrets locally in Secret Manager, as suggested in
this article
I am not developing alone, so while I am not averse to using a tool like Secret Manager, I still need my offline secrets to stay in sync with Azure Key Vault in case others change anything.
Question: How do I manage secrets offline in a way that is synchronized with Azure Key Vault?
it was not possible to do this for the trigger connection string for e.g. service bus or blob storage.
In short, it's possible.
Here are the steps you can follow; refer to the detailed article for more.
1. Add a system-assigned managed identity to the Azure Function.
2. Go to the Access control section of your Key Vault and click on the Add a role assignment blade.
3. Go to your Key Vault, click on Access policies, and add the function's service principal with the secret Get permission.
4. When you use a ServiceBusTrigger, you set ServiceBusConnectionString under Function > Configuration > Application settings.
[FunctionName("ProcessTopicMessage")] // function name is illustrative; _topicName/_subscriptionName are const fields defined elsewhere
public static void Run([ServiceBusTrigger(_topicName, _subscriptionName, Connection = "ServiceBusConnectionString")] string mySbMsg, ILogger log)
{
    log.LogInformation($"Processed message: {mySbMsg}");
}
5. Now change the value of ServiceBusConnectionString to an Azure Key Vault reference of the form @Microsoft.KeyVault(SecretUri=<secret URI with version>). Then you can run your function successfully with Key Vault.
I was wondering if it's possible to initialize the queue trigger, or even the blob trigger, with a connection string that is read from Azure Key Vault.
Right now, we have to set these data connections via environment settings in the blade properties. However, I want to just use the service principal to retrieve a token for Azure Key Vault and get all these connection strings from there.
I'm trying to figure how to get this working in java.
Thanks,
Derek
This feature is tracked and in progress here:
Feature request: retrieve Azure Functions' secrets from Key Vault
Add binding to Key Vault
EDIT 28/11/2018: It is currently in preview
Simplifying security for serverless and web apps with Azure Functions and App Service
Former answer 07/10/2018
This solution won't work for Triggers using the consumption plan.
In the meantime, I did some research about your problem, and it is possible to read config from Key Vault if you use Azure Functions v2.
I've created an Azure Functions v2 (.NET Standard) from Visual Studio.
It uses:
NETStandard.Library v2.0.3
Microsoft.NET.Sdk.Functions v1.0.22
Microsoft.Azure.WebJobs v3.0.0
Microsoft.Azure.WebJobs.Extensions.Storage v3.0.0
Because Azure Functions v2 uses ASP.NET Core, I was able to follow this link to configure my function app to use Azure Key Vault:
Azure Key Vault configuration provider in ASP.NET Core
I've added this nuget package:
Microsoft.Extensions.Configuration.AzureKeyVault
I've configured my app to use this nuget package:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions; // for the Replace extension
using System.Linq;

[assembly: WebJobsStartup(typeof(FunctionApp1.WebJobsExtensionStartup), "A Web Jobs Extension Sample")]
namespace FunctionApp1
{
    public class WebJobsExtensionStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            // Get the existing configuration
            var serviceProvider = builder.Services.BuildServiceProvider();
            var existingConfig = serviceProvider.GetRequiredService<IConfiguration>();

            // Create a new config based on the existing one and add Key Vault
            var configuration = new ConfigurationBuilder()
                .AddConfiguration(existingConfig)
                .AddAzureKeyVault($"https://{existingConfig["keyVaultName"]}.vault.azure.net/")
                .Build();

            // Replace the existing configuration
            builder.Services
                .Replace(ServiceDescriptor.Singleton(typeof(IConfiguration), configuration));
        }
    }
}
My Azure function app uses MSI:
I've granted Get/List secrets permissions to the function app on my Key Vault:
I have a small queue-triggered function:
public static class Function2
{
    [FunctionName("Function2")]
    public static void Run([QueueTrigger("%queueName%", Connection = "queueConnectionString")] string myQueueItem, ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    }
}
The queueName is defined in the local.settings.json file (App settings blade once deployed):
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"FUNCTIONS_WORKER_RUNTIME": "dotnet",
"keyVaultName": "thomastestkv",
"queueName": "myqueue"
}
}
The queueConnectionString is configured in my Key Vault:
Sourcing Application Settings from Key Vault
The Key Vault references feature makes it so that your app can work as if it were using App Settings as they have been, meaning no code changes are required. You can get all of the details from our Key Vault reference documentation, but I’ll outline the basics here.
This feature requires a system-assigned managed identity for your app. Later in this post I’ll be talking about user-assigned identities, but we’re keeping these previews separate for now.
You’ll then need to configure an access policy on your Key Vault which gives your application the GET permission for secrets. Learn how to configure an access policy.
Lastly, set the value of any application setting to a reference of the following format:
@Microsoft.KeyVault(SecretUri=secret_uri_with_version)
Where secret_uri_with_version is the full URI for a secret in Key Vault. For example, this would be something like: https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931
That’s it! No changes to your code required!
For this initial preview, you need to explicitly set a secret version, as we don’t yet have built-in rotation handling. This is something we look forward to making available as soon as we can.
User-assigned managed identities (public preview)
Our existing support for managed identities is called system-assigned. The idea is that the identity is created by the platform for a specific application and is tied to the lifecycle of the application. If you delete the application, the identity is removed from Azure Active Directory immediately.
Today we’re previewing user-assigned identities, which are created as their own Azure resource and then assigned to a given application. A user-assigned identity can also be assigned to multiple applications, and an application can have multiple user-assigned identities.
For more details, check this post.
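To illustrate the "no code changes" point: once an app setting's value is a Key Vault reference, the function reads the setting as usual and the platform resolves the reference before the process sees it. A minimal sketch (the setting name is hypothetical):
using System;

// "ServiceBusConnectionString" is a hypothetical app setting whose value is a
// @Microsoft.KeyVault(SecretUri=...) reference; by the time the function reads
// it, the platform has already substituted the secret's value.
string resolved = Environment.GetEnvironmentVariable("ServiceBusConnectionString");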
Update
This is now GA
This was just released as preview a couple days ago.
It works as described in the announcement quoted above: enable a system-assigned managed identity, grant the app the GET permission for secrets via a Key Vault access policy, and set the app setting to a @Microsoft.KeyVault(SecretUri=secret_uri_with_version) reference.
Using Keyvault integration within the function runtime
I just implemented it in Java following the two references below.
https://learn.microsoft.com/en-us/azure/app-service/app-service-key-vault-references
https://medium.com/statuscode/getting-key-vault-secrets-in-azure-functions-37620fd20a0b
In Java, use System.getenv("SECRET_KEY") to read the values from your app settings.
Happy to help if you need further assistance.
I have already given my answer above; this answer is for @Matt Sanders's comment.
I just want to explain how MSI works in the Azure environment:
"I have 2 user assigned identities, 1 has permissions to KeyVault, the other does not. How can you specify the correct user assigned an identity to use for retrieving the secret? I'm guessing this is not possible and User Assigned Identities are not supported even though they are listed in your answer. – Matt Sanders"
When you want to use Azure managed identities, your application must be registered in Azure AD. For example, say multiple users are accessing your web application, and within that application you are trying to access the vault's secrets. In that case, the vault doesn't care about the users that consume your application; it cares about the application itself.
Please refer to the image below.
As shown in the picture, only the Azure Function is added as an identity to the vault; the other applications are not.
So whoever uses the Azure Function can access the vault's secrets; in this example, only A and B can access the secrets.
I'm trying to set up a function to take a snapshot of a blob container every time a change is pushed to it. There is some pretty simple functionality in Azure Functions to do this, but it only works for general purpose storage accounts. I'm trying to do this with a blob only storage account. I'm very new to Azure so I may be approaching this all wrong, but I haven't been able to find much helpful information. Is there any way to do this?
As @joy-wang mentioned, the Azure Functions runtime requires a general purpose storage account.
A general purpose storage account is required to configure the AzureWebJobsStorage and the AzureWebJobsDashboard settings (local.settings.json or Appsettings Blade in the Azure portal):
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "my general purpose storage account connection string",
"AzureWebJobsDashboard": "my general purpose storage account connection string",
"MyOtherStorageAccountConnectionstring": "my blob only storage connection string"
}
}
If you want to create a BlobTrigger function, you can specify another connection string and create a snapshot every time a blob is created/updated:
[FunctionName("Function1")]
public static async Task Run([BlobTrigger("test-container/{name}",
Connection = "MyOtherStorageAccountConnectionstring")]CloudBlockBlob myBlob,
string name, TraceWriter log)
{
log.Info($"C# Blob trigger function Processed blob\n Name:{name}");
await myBlob.CreateSnapshotAsync();
}
In Visual Studio:
I have tried to create a snapshot for a blob-only storage account named joyblobstorage, but it failed. I suppose you would get the same error as in the screenshot.
As the error message says: Microsoft.Azure.WebJobs.Host: Storage account 'joyblobstorage' is of unsupported type 'Blob-Only/ZRS'. Supported types are 'General Purpose'.
In the portal:
I tried to create a Function App and use the existing storage, but it could not find my blob-only storage account. The Azure Function setup in the portal does not allow you to select a blob-only storage account. Please refer to the screenshot.
Conclusion:
It is not possible to use a blob-only storage account as the Function App's storage account. In the official documentation, you can see the storage account requirements.
When creating a function app in App Service, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage.
Also, in the App settings reference, you can see:
AzureWebJobsStorage
The Azure Functions runtime uses this storage account connection string for all functions except for HTTP triggered functions. The storage account must be a general-purpose one that supports blobs, queues, and tables.
AzureWebJobsDashboard
Optional storage account connection string for storing logs and displaying them in the Monitor tab in the portal. The storage account must be a general-purpose one that supports blobs, queues, and tables.
Here is the feedback item where the Azure App Service team explains the requirements on the storage account; you can refer to it.