I have mounted a bucket to archive old data. My problem is that a non-root user needs access to these files from time to time. Is there any way to grant her read access to this bucket/mount? She uses Windows, if that matters.
You can give her read permission on the bucket so she can see the files in the cloud console. Since you are using FUSE locally, all files have permission bits 0644 and all directories have permission bits 0755 by default; you can find more information on how to change those FUSE permissions here.
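If the mount is done with gcsfuse, a hedged sketch of mount options that make the files readable by a specific non-root user (the UID/GID and bucket name are placeholders):
# Illustrative only: explicit ownership and permission bits, plus allow_other so users
# other than the one performing the mount can read the files.
gcsfuse --uid <uid> --gid <gid> --file-mode 644 --dir-mode 755 -o allow_other <bucket-name> /mnt/archive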
I'm trying to deploy the atmoz/sftp image for multiple users. I am new to this technology.
Below are the points I have tried.
I took the template from GitHub and deployed it on Azure, and with the help of the template I was able to create two users (user1 and user2).
For user1 I created folder1 and for user2 folder2, and I can see the same structure when logging in over SFTP.
For each of the folders I created a different file share.
My requirement now is to show both folders to both users, but with per-user permissions: user1 should have write permission on folder1 and read permission on folder2, and user2 should have write permission on folder2 and only read permission on folder1.
(Screenshot: SFTP login for the first user, user1)
Currently, Azure Container Instances does not support changing the permissions when you mount an Azure file share. All of the users' home paths are owned by the root user and the root group, and if you execute the command mount inside the container instance you can see that both file_mode and dir_mode are set to the permission 0777. There is no property to change the mount options in the ARM template, so I'm afraid you cannot achieve your purpose.
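For comparison, a hedged sketch of a manual SMB mount on a VM you control (account, share and key are placeholders), where these options can be chosen explicitly; Container Instances just does not expose them:
# Illustrative only: on a self-managed mount, file_mode/dir_mode are picked at mount time.
sudo mount -t cifs //<account>.file.core.windows.net/<share> /mnt/<share> -o vers=3.0,username=<account>,password=<storage-key>,dir_mode=0755,file_mode=0644,serverino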
Hi, I have the below questions.
I have a storage account, and inside the storage account I have file shares.
And below is my folder structure
Root\Account 1
Root\Account 1\ReadOnly
Root\Account 1\ReadAndWrite
Root\Account 2
Root\Account 2\ReadOnly
Root\Account 2\ReadAndWrite
Now my question is: can I map my end users to Root\Account 2\ReadOnly or Root\Account 2\ReadAndWrite as their network-connected shared drive "Z:\"?
I was actually following the blog post https://husseinsalman.com/securing-access-to-azure-storage-part-5-stored-access-policy/; what I do not understand is how to use a SAS signature to mount the share as a network folder.
It's not possible to mount a specific directory; however, you can set permissions on files and directories. You can check Azure Active Directory Domain Services authentication on Azure Files to assign permissions for the directories:
Azure Files identity-based authentication options for SMB access
Configure directory and file level permissions over SMB
If you mount the file share by using SMB, you don't have folder-level control over permissions. However, if you create a shared access signature by using the REST API or client libraries, you can specify read-only or write-only permissions on folders within the share.
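For the drive-mapping part, a hedged sketch (account, share and key are placeholders): it is the whole share, not a single folder, that gets mapped, and the per-folder restrictions then have to come from the identity-based permissions above rather than from the mapping itself:
net use Z: \\<account>.file.core.windows.net\<share> /user:AZURE\<account> <storage-key>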
I have a Gen2 storage account and created a container.
Folder Structure looks something like this
StorageAccount
-> Container1
   -> normal-data
      -> Files 1....n
   -> sensitive-data
      -> Files 1....m
I want to give the user read-only access to normal-data only, and NOT to sensitive-data.
This can be achieved by setting ACLs at the folder level and giving access to the security service principal.
But the limitation of this approach is that access only applies to files loaded into the directory after the ACL is set up, so files that were already present inside the directory cannot be accessed.
Because of this limitation, new users cannot be given full read access (unless new users use the same service principal, which is not the ideal scenario in my use case).
Please suggest a read-only access method in ADLS Gen2 where:
If files are already present under a folder and a new user is onboarded, they should be able to read all the files under the folder.
The new user should get access only to the normal-data folder and NOT to sensitive-data.
PS: There is a script for assigning ACLs recursively, but as I will get close to a million records each day under the normal-data folder, it would not be feasible for me to run the recursive ACL script.
You could create an Azure AD security group and give that group read-only access to the normal-data folder.
Then you can add new users to the security group.
See: https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal
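A hedged sketch of the one-time setup with the Azure CLI (the group object ID, account and container names are placeholders; the default: entry is what makes files loaded later inherit the permission, so the recursive pass only ever has to run once rather than daily):
# Grant the security group read+execute on normal-data, on the files already inside it,
# and as a default ACL so newly loaded files inherit it automatically.
az storage fs access update-recursive --acl "group:<group-object-id>:r-x,default:group:<group-object-id>:r-x" -p normal-data -f <container> --account-name <account> --auth-mode login
# The group also needs an execute (--x) entry on the container root so it can traverse
# down to normal-data; no entry is given on sensitive-data.
# New users are then simply added to the group; no further ACL changes are needed.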
My goal is to restrict access to an Azure Data Lake Gen 2 storage account at the directory level (which should be possible according to Microsoft's promises).
I have two directories, data and sensitive, in a Data Lake Gen 2 container. For a specific user, I want to grant read access to the directory data and prevent any access to the directory sensitive.
Following the documentation, I removed all RBAC assignments for that user (on the storage account as well as the data lake container) so that there is no inherited read access on the directories. Then I added a read ACL entry on the data directory for that user.
My expectation:
The user can directly download files from the data directory.
The user cannot access files of the sensitive directory.
Reality:
When I try to download files from the data directory I get a 403 ServiceCode=AuthorizationPermissionMismatch
az storage blob directory download -c containername -s data --account-name XXX --auth-mode login -d "./download" --recursive
RESPONSE Status: 403 This request is not authorized to perform this operation using this permission.
I expect that this should work. Otherwise I can only grant access by assigning the Storage Blob Reader role, but that applies to every directory and file within the container and cannot be overridden by ACL entries. Did I do something wrong here?
According to my research, if you want to grant a security principal read access to a file, you need to give the security principal Execute permission on the container and on each folder in the hierarchy that leads to the file. For more details, please refer to the document.
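A hedged sketch of what those grants could look like with the Azure CLI (the object ID is a placeholder; note that az storage fs access set --acl replaces the whole ACL of the path, so the existing entries, shown here with typical defaults, should first be checked with az storage fs access show and included):
# Execute on the container root lets the user traverse into it.
az storage fs access set --acl "user::rwx,group::r-x,other::---,mask::r-x,user:<user-object-id>:--x" -p / -f containername --account-name XXX --auth-mode login
# Read+execute on the data directory and on the files already inside it; nothing is
# granted on the sensitive directory, so it stays inaccessible.
az storage fs access update-recursive --acl "user:<user-object-id>:r-x" -p data -f containername --account-name XXX --auth-mode login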
I found that I could not get ACLs to work without an RBAC role. I ended up creating a custom "Storage Blob Container Reader" RBAC role in my resource group with only the permission "Microsoft.Storage/storageAccounts/blobServices/containers/read", which by itself does not grant listing or reading of the actual blobs.
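A hedged sketch of such a role definition (the subscription and resource group IDs are placeholders):
# Illustrative custom role: management-plane read on containers only, no DataActions,
# so access to the blobs themselves is still governed by the ACLs.
cat > container-reader-role.json <<'EOF'
{
  "Name": "Storage Blob Container Reader",
  "Description": "Read containers; blob data access is left to ACLs.",
  "Actions": ["Microsoft.Storage/storageAccounts/blobServices/containers/read"],
  "DataActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>/resourceGroups/<resource-group>"]
}
EOF
az role definition create --role-definition @container-reader-role.json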
Technical Stack
MarkLogic 9.0
CentOS Linux
Azure Blob
Blobfuse
To make sure we do not have to worry about the data disk size for the MarkLogic forest, we have mounted Azure Blob storage to a folder on the Linux machine.
There are a few things I noticed:
Need to create a folder in Linux.
Create a folder and point it to the above folder.
Then configure Blobfuse, otherwise we get permission denied while creating the forest.
Use the below command to give permission to all:
chmod 777 -R
Now when we started importing using MarkLogic Content Pump (MLCP), we got this error:
19/03/15 17:01:19 ERROR mapreduce.ContentWriter: SVC-FILSTAT: File status error: stat64 '/mnt/mycontainer/Forests/forest-01/000043e5': Permission denied
So if you look at the image below: first we tried with mycontainer, but as soon as we map it to Azure Blob it does not show green the way azureblob does. We still need to map azureblob to the "azureblob" folder.
It seems I am missing something here; is it something to do with the Azure Blob security settings?
From my test, when you mount Azure Blob storage on Linux, for example Ubuntu 18.04 (which I'm using), if you want to allow other users to use the mount directory, you can add the parameter -o allow_other when you execute the blobfuse command.
To allow access to all users, you can mount via the option -o allow_other.
Also, I think you should give other users access by adjusting the ownership with the command chown. For more details, see How to mount Blob storage as a file system with blobfuse.
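A hedged sketch of such a mount (the temp path and config file are placeholders; if a non-root user performs the mount, user_allow_other also has to be enabled in /etc/fuse.conf):
# Illustrative blobfuse mount that lets non-root users (such as the MarkLogic daemon user) reach the files.
sudo blobfuse /mnt/mycontainer --tmp-path=/mnt/blobfusetmp --config-file=/path/to/fuse_connection.cfg -o allow_other -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120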
First I would like to thank Charles for his efforts and extended help on this issue. Thanks, Charles :). I am sure this will help me sometime, somewhere.
I got a link on how to set up MarkLogic on Azure.
Page 27 has the steps for configuring MarkLogic for Azure Blob Storage.
In summary it is
Create Storage account in Azure
Create Blob container
Go to MarkLogic server (http://localhost:8001)
Go to Security -> Credentials
Provide Storage account and Azure storage key
While creating the MarkLogic forest, mention the container path as the data directory:
azure://mycontainer/mydirectory/myfile
And you are done. No Blobfuse, no drive mount, just a configuration in MarkLogic
Awesome!!
It's working like a dream :)