Azure storage account file share: can I mount a directory as a drive?

Hi, I have the questions below.
I have a storage account, and inside it I have file shares.
This is my folder structure:
Root\Account 1
Root\Account 1\ReadOnly
Root\Account 1\ReadAndWrite
Root\Account 2
Root\Account 2\ReadOnly
Root\Account 2\ReadAndWrite
Now my question is: can I map my end users to Root\Account 2\ReadOnly or Root\Account 2\ReadAndWrite as their network-connected shared drive "Z:\"?
I was trying to follow this blog post: https://husseinsalman.com/securing-access-to-azure-storage-part-5-stored-access-policy/ . What I do not understand is how to supply a SAS signature when mounting as a network folder.

It's not possible to mount a specific directory; however, you can set permissions on files and directories. You can check Azure Active Directory Domain Services authentication on Azure Files to assign permissions to the directories:
Azure Files identity-based authentication options for SMB access
Configure directory and file level permissions over SMB
If you mount the file share by using SMB, you don't have folder-level control over permissions. However, if you create a shared access signature by using the REST API or client libraries, you can specify read-only or write-only permissions on folders within the share.
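Note that the SAS part of this answer does not apply to mounting: an SMB mount authenticates with the account key or an AD identity, while a SAS only works for REST clients such as Storage Explorer or azcopy. For reference, a service SAS is just an HMAC-SHA256 signature over a newline-separated "string to sign". The sketch below builds one with only the standard library; the field layout follows my reading of service version 2018-11-09, so verify it against the "Create a service SAS" REST documentation before relying on it:

```python
import base64
import hashlib
import hmac
import urllib.parse

def make_file_sas(account, key_b64, share, path,
                  permissions="r", expiry="2030-01-01T00:00:00Z"):
    """Sketch only: build a service SAS query string for one share path.

    Field order is my understanding of service version 2018-11-09;
    check the official REST docs before using this in anger.
    """
    version = "2018-11-09"
    canonicalized = f"/file/{account}/{share}/{path}"
    string_to_sign = "\n".join([
        permissions,        # sp
        "",                 # st (no start time)
        expiry,             # se
        canonicalized,      # canonicalized resource
        "",                 # si (stored access policy identifier)
        "",                 # sip
        "https",            # spr
        version,            # sv
        "", "", "", "", "", # rscc, rscd, rsce, rscl, rsct overrides
    ])
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key_b64),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    return urllib.parse.urlencode({
        "sv": version, "se": expiry, "sp": permissions,
        "spr": "https", "sig": signature,
    })
```

Appending the resulting query string to the file/directory URL gives REST clients scoped, expiring access to just that path, which is what the stored-access-policy blog post is driving at.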

Related

How to set up ACLs without using RBAC in ADLS Gen2?

Please let me know how you set up ACLs without using RBAC. I tried the steps below:
Created a user in Active Directory
In the Storage (Gen2) account -> IAM -> gave the user the Reader role
In Storage Explorer -> right-clicked the root folder -> Manage Access -> gave read, write, and execute permissions
This is still not working. I guess that since I gave the Reader role in IAM, the ACL is not getting applied.
However, if I do not set read access in IAM, the user is unable to see the storage account when logging in to the Azure portal. Please let me know how I should apply the ACL.
I have 5 folders. I want to give the DE team rwx access to 3 folders and the DS team r-x access.
If you rely on ACLs to access ADLS Gen2 via the Azure portal, it will not work, because in the portal users access ADLS Gen2 with the account key by default. That means users would need permission to list the account key, which an ACL cannot grant. For more details, please refer to here. If you want to use ACLs, I suggest you use azcopy.
For example, my ADLS Gen2 layout is:
FileSystem: test
Folder: result_csv
I want to list all the files in the folder result_csv.
Configure the ACL. For more details about ACLs, please refer to here.
Operation    /      result_csv/
list         --x    r-x
Test it:
azcopy login --tenant-id <your tenant>
azcopy list "<your url>"
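The --x / r-x split in the table above generalizes: listing a directory requires execute on every ancestor directory and read+execute on the directory itself. A small hypothetical Python model of that evaluation rule (the function and dict layout are mine for illustration, not an Azure API):

```python
def ancestors(path):
    """Yield '/' and every intermediate directory above `path`."""
    parts = [p for p in path.strip("/").split("/") if p]
    yield "/"
    for i in range(1, len(parts)):
        yield "/" + "/".join(parts[:i])

def can_list(acls, target):
    """acls: dict of directory path -> 'rwx'-style bits for one user.

    Listing a directory needs execute (x) on every ancestor and
    read+execute (at least r-x) on the target directory itself.
    """
    if any("x" not in acls.get(a, "---") for a in ancestors(target)):
        return False
    bits = acls.get("/" + target.strip("/"), "---")
    return "r" in bits and "x" in bits

# the table above: --x on the filesystem root, r-x on result_csv
acls = {"/": "--x", "/result_csv": "r-x"}
```

With this model, dropping the x bit anywhere on the path to result_csv breaks listing, which is exactly the symptom people hit when they set an ACL on a leaf folder only.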

How to create multiple users and folders with user-defined permissions in SFTP hosted on an Azure container instance

I'm trying to deploy the atmoz/sftp image for multiple users. I am new to this technology.
Below are the points I have tried.
I took the template from GitHub and deployed it on Azure, and with the help of the template I was able to create two users (user1 and user2).
For user1 I created folder1, and for user2 folder2, and I can see the same structure when logging in to SFTP.
For each of the folders I created a different file share.
My requirement now is to show both folders to both users, but with per-user permissions: user1 should have write permission on folder1 and read permission on folder2, and user2 should have write permission on folder2 and only read permission on folder1.
SFTP login for the first user, i.e. user1
Currently, Azure Container Instances does not support changing permissions when you mount an Azure file share, and you can see that all the users' home paths are owned by the root user and the root group.
When you execute the mount command inside the container instance, you can see that both file_mode and dir_mode are set to 0777, and there is no property in the ARM template to change the mount options. So I'm afraid you cannot achieve your purpose.
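To see what the 0777 in that mount output actually means, Python's standard library can render the mode bits; the cifs file_mode/dir_mode options are exactly these POSIX permission bits:

```python
import stat

# dir_mode=0777: every user gets read/write/execute on directories
print(stat.filemode(stat.S_IFDIR | 0o777))   # drwxrwxrwx
# file_mode=0777: every file is readable, writable, and executable by everyone
print(stat.filemode(stat.S_IFREG | 0o777))   # -rwxrwxrwx
# the per-user read-only view asked for above would need something like
print(stat.filemode(stat.S_IFDIR | 0o555))   # dr-xr-xr-x
```

Since ACI fixes the mount at 0777 and root:root ownership, there is no mode combination the sftp container can apply per user, which is why the answer above says the requirement cannot be met this way.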

Folder-level access control in ADLS Gen2 for future users

I have a Gen2 storage account and created a container.
The folder structure looks something like this:
StorageAccount
->Container1
->normal-data
->Files 1....n
->sensitive-data
->Files 1....m
I want to give the user read-only access to normal-data only, and NOT to sensitive-data.
This can be achieved by setting ACLs at the folder level and giving access to the security service principal.
But the limitation of this approach is that the user can only access files loaded into the directory after the ACL is set up, and cannot access files that were already present in the directory.
Because of this limitation, new users cannot be given full read access (unless new users use the same service principal, which is not ideal in my use case).
Please suggest a read-only access method in ADLS Gen2 where:
If files are already present under a folder and a new user is onboarded, they can read all the files under the folder
A new user gets access only to the normal-data folder and NOT to sensitive-data
PS: There is a script for assigning ACLs recursively, but as I will get close to a million records each day under the normal-data folder, the recursive ACL script is not feasible for me.
You could create an Azure AD security group and give that group read-only access to the normal-data folder.
Then you can add new users to the security group.
See: https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal
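With the group approach, the ACL only has to be set once, using the group's object ID as the principal; so that newly loaded files inherit the entry, the folder also needs a *default* ACL. A sketch of the two `az storage fs access set` invocations, built as argument lists (the flag names are from my memory of that command, and `access set` may replace the full ACL, so verify with `az storage fs access set -h` and include any existing entries):

```python
def acl_commands(account, filesystem, folder, group_oid):
    """Build two hypothetical `az storage fs access set` calls:
    one access ACL on the folder itself, and one default ACL so that
    newly created children inherit the group's r-x entry.
    Flag names are an assumption -- check `az storage fs access set -h`.
    """
    base = ["az", "storage", "fs", "access", "set",
            "--account-name", account, "-f", filesystem, "-p", folder]
    access = base + ["--acl", f"group:{group_oid}:r-x"]
    default = base + ["--acl", f"default:group:{group_oid}:r-x"]
    return access, default

# hypothetical names: storage account, container, and group object ID
access, default = acl_commands("mystorageacct", "container1", "normal-data",
                               "00000000-0000-0000-0000-000000000000")
```

Onboarding a new user is then just an Azure AD group-membership change; no ACL needs to be touched, which sidesteps the million-files-per-day recursive re-ACL problem.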

Grant read access to a mounted bucket for a non-root user

I have mounted a bucket to archive old data. My problem is that a non-root user needs access to these files from time to time. Is there any way to grant her read access to this bucket/mount? She uses Windows, if that matters.
You can give her read permissions on the bucket so she can see the files using the Cloud Console. As you are using FUSE locally, all files have permission bits 0644 and all directories have permission bits 0755 by default; you can find more information on how to change those FUSE permissions here.
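Locally, those mode bits come from gcsfuse's mount options, so read access for her account can be granted at mount time. A sketch of the invocation, built as an argument list (the --uid/--gid/--file-mode/--dir-mode and -o options are as I recall them from the gcsfuse docs; confirm with `gcsfuse --help`):

```python
def gcsfuse_cmd(bucket, mountpoint, uid, gid):
    """Build a hypothetical gcsfuse command mounting a bucket read-only
    for one non-root user. Option names are an assumption from the
    gcsfuse documentation -- verify with `gcsfuse --help`.
    """
    return ["gcsfuse",
            "--uid", str(uid), "--gid", str(gid),       # present files as owned by her user
            "--file-mode", "444", "--dir-mode", "555",  # read-only permission bits
            "-o", "ro", "-o", "allow_other",            # read-only; visible to other users
            bucket, mountpoint]

# hypothetical bucket name, mountpoint, and uid/gid
print(" ".join(gcsfuse_cmd("archive-bucket", "/mnt/archive", 1001, 1001)))
```

Note the FUSE modes only shape the local view; her actual ability to read objects is still decided by the bucket's IAM permissions.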

Grant access to Azure Data Lake Gen2 via ACLs only (no RBAC)

My goal is to restrict access to an Azure Data Lake Gen2 storage account at the directory level (which should be possible according to Microsoft's promises).
I have two directories, data and sensitive, in a Data Lake Gen2 container. For a specific user, I want to grant read access to the directory data and prevent any access to the directory sensitive.
Following the documentation, I removed all RBAC assignments for that user (on the storage account as well as on the data lake container) so that there is no inherited read access on the directories. Then I added a read ACL entry on the data directory for that user.
My expectation:
The user can directly download files from the data directory.
The user cannot access files in the sensitive directory.
Reality:
When I try to download files from the data directory, I get a 403 ServiceCode=AuthorizationPermissionMismatch error:
az storage blob directory download -c containername -s data --account-name XXX --auth-mode login -d "./download" --recursive
RESPONSE Status: 403 This request is not authorized to perform this operation using this permission.
I would expect this to work. Otherwise I can only grant access by assigning the Storage Blob Data Reader role, but that applies to every directory and file within the container and cannot be overridden by ACL entries. Did I do something wrong here?
According to my research, if you want to grant a security principal read access to a file, you need to give the security principal Execute permission on the container and on each folder in the hierarchy leading to the file. For more details, please refer to the document.
I found that I could not get ACLs to work without an RBAC role. I ended up creating a custom "Storage Blob Container Reader" RBAC role in my resource group with only the permission Microsoft.Storage/storageAccounts/blobServices/containers/read, so the role itself grants no listing or reading of the actual blobs; that access stays governed by the ACLs.
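A custom role like the one described can be defined as a JSON role definition and created with `az role definition create --role-definition role.json`; a sketch under the assumption of the fields a role definition normally carries (name, description, and scope are placeholders):

```json
{
  "Name": "Storage Blob Container Reader",
  "IsCustom": true,
  "Description": "List and read container metadata only; blob data access is left to ACLs.",
  "Actions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
  ]
}
```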
