In this URL, https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files, it says that:
Azure file share volume mount requires the Linux container run as root.
Azure File share volume mounts are limited to CIFS support.
I have an Azure container instance running Postgres, and Postgres does not allow me to run the server as root.
The Azure file share's owner and group are both set to root, and I am unable to change either the permissions or the owner and group of the mounted folder.
I would want to set the owner and group of the folder to postgres, for example, and change its permissions.
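For reference, a minimal sketch of the kind of deployment being described, using the Azure CLI; the resource group, container name, image tag, share name, and key below are placeholders. The share is mounted over CIFS with root ownership and 0777 modes, which is what triggers Postgres's refusal to start:

# Hypothetical ACI deployment mounting an Azure file share (all names are placeholders)
az container create \
  --resource-group myResourceGroup \
  --name postgres-aci \
  --image postgres:13 \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key <storage-account-key> \
  --azure-file-volume-share-name pgshare \
  --azure-file-volume-mount-path /var/lib/postgresql/data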
I have a VM (Rocky Linux) that I log in to with Active Directory, so this account does not exist in the VM's /etc/passwd file.
When I use this AD account, the uid and gid (for example, uid=801224105, gid=801200414) are different from those in my (Docker) container.
The sources are in a volume, so I have a permissions problem in my container, because the container uses uid and gid 1000:1000.
I know I can map a user with userns-remap, but only if the user exists in /etc/passwd.
Do you have any idea?
Many thanks
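One hedged option, assuming the image in question can run as an arbitrary user: start the container with the AD account's numeric uid and gid, which, unlike userns-remap, does not require an entry in /etc/passwd. The volume path and image name here are placeholders:

# Run the container as the AD account's numeric uid:gid (values taken from the question)
docker run --rm -it --user 801224105:801200414 -v /path/to/sources:/work myimage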
I'm trying to deploy the atmoz/sftp image for multiple users. I am new to this technology.
Below are the things I have tried.
I took the template from GitHub and deployed it on Azure, and with the help of the template I was able to create two users (user1 and user2).
For user1 I created folder1, and for user2 folder2, and I can see the same structure when logging in over SFTP.
For each of the folders I created a different file share.
My requirement now is to show both folders to both users, but with per-user permissions: user1 should have write permission on folder1 and read permission on folder2, and user2 should have write permission on folder2 and only read permission on folder1.
(Screenshot from the original post: SFTP login for the first user, user1.)
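For context, a minimal local sketch of the multi-user atmoz/sftp setup being described; the user names, passwords, uids, and folders are illustrative, and each argument follows the image's user:password:uid:gid:folder convention:

# Two SFTP users, each with their own home folder (values are placeholders)
docker run -p 2222:22 -d atmoz/sftp \
  user1:pass1:1001:100:folder1 \
  user2:pass2:1002:100:folder2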
Currently, Azure Container Instances does not support changing the permissions when you mount an Azure file share, and you can see that all the users' home paths are owned by the root user and the root group.
When you execute the command mount inside the container instance, you see something like this:
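(The original answer showed a screenshot; illustrative output for a CIFS-mounted Azure file share, with placeholder account and share names, looks roughly like this:)

//mystorageaccount.file.core.windows.net/myshare on /mnt/myshare type cifs (rw,relatime,vers=3.0,cache=strict,username=mystorageaccount,uid=0,gid=0,file_mode=0777,dir_mode=0777,...)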
Both file_mode and dir_mode are set to 0777, and there is no property in the ARM template to change the mount options, so I'm afraid you cannot achieve your purpose.
Hi, I have the questions below.
I have a storage account, and inside the storage account I have file shares.
Below is my folder structure:
Root\Account 1
Root\Account 1\ReadOnly
Root\Account 1\ReadAndWrite
Root\Account 2
Root\Account 2\ReadOnly
Root\Account 2\ReadAndWrite
Now my question is: can I map my end users to Root\Account 2\ReadOnly or Root\Account 2\ReadAndWrite as their network-connected shared drive "Z:\"?
I was actually trying to follow the blog post https://husseinsalman.com/securing-access-to-azure-storage-part-5-stored-access-policy/; what I do not understand is how to use the SAS signature to mount the share as a network folder.
It's not possible to mount a specific directory; however, you can set permissions on files and directories. You can check Azure Active Directory Domain Services authentication on Azure Files to assign permissions on the directories:
Azure Files identity-based authentication options for SMB access
Configure directory and file level permissions over SMB
If you mount the file share by using SMB, you don't have folder-level control over permissions. However, if you create a shared access signature by using the REST API or client libraries, you can specify read-only or write-only permissions on folders within the share.
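For the drive-mapping part of the question, a hedged sketch of mapping the share as Z: from Windows; the account name, share name, and key are placeholders. Note that an SMB mount authenticates with the storage account key or an AD identity rather than a SAS, and it always targets the share root, not a subdirectory:

REM Map the file share as drive Z: (run in cmd; replace the placeholders)
net use Z: \\mystorageaccount.file.core.windows.net\myshare /user:AZURE\mystorageaccount <storage-account-key>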
I have a Gen2 storage account and created a container.
Folder structure looks something like this:
StorageAccount
-> Container1
   -> normal-data
      -> Files 1....n
   -> sensitive-data
      -> Files 1....m
I want to give the user read-only access to normal-data only, and NOT to sensitive-data.
This can be achieved by setting ACLs at the folder level and granting access to the security service principal.
But the limitation of this approach is that the user can only access files loaded into the directory after the ACL is set up (a default ACL applies only to newly created children), and so cannot access files that were already present inside the directory.
Because of this limitation, new users cannot be given full read access (unless new users use the same service principal, which is not the ideal scenario in my use case).
Please suggest a read-only access method in ADLS Gen2 where:
if files are already present under a folder and a new user is onboarded, they should be able to read all the files under the folder;
the new user should get access only to the normal-data folder and NOT to sensitive-data.
PS: There is a script for assigning ACLs recursively, but as I will get close to a million records each day under the normal-data folder, it would not be feasible for me to use the recursive ACL script.
You could create an Azure AD security group and give that group read-only access to the read-only folder.
Then you can add new users to the security group.
See: https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal
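A minimal sketch of that approach with the Azure CLI; the group name, object IDs, filesystem, and account name are placeholders. Because the group, rather than any individual user, appears in the ACL, onboarding a new user is just a group-membership change, and a default ACL entry makes files that land in normal-data later inherit read access without re-running anything recursive:

# Create the security group (display name is illustrative)
az ad group create --display-name "normal-data-readers" --mail-nickname "normal-data-readers"

# One-time pass: grant the group r-x on normal-data and everything already in it,
# plus a default ACL entry so files created later inherit the same access
az storage fs access update-recursive \
  --acl "group:<group-object-id>:r-x,default:group:<group-object-id>:r-x" \
  --path normal-data \
  --file-system Container1 \
  --account-name <storage-account> \
  --auth-mode login

# Onboard a new user by adding them to the group
az ad group member add --group "normal-data-readers" --member-id <user-object-id>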
Technical Stack
MarkLogic 9.0
CentOS Linux
Azure Blob
Blobfuse
To make sure we do not have to worry about the data disk size for the MarkLogic forests, we have mapped Azure Blob storage to a folder on the Linux machine.
There are a few things I noticed:
We need to create a folder in Linux.
Then map the blob container to the above folder.
Then configure Blobfuse; otherwise we get "permission denied" while creating the forest.
Use the command below to give permission to all:
chmod -R 777 /mnt/mycontainer
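For reference, the Blobfuse configuration mentioned above typically lives in a small config file (commonly named fuse_connection.cfg); the account name, key, and container below are placeholders:

accountName mystorageaccount
accountKey <storage-account-key>
containerName mycontainer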
Now, when we started importing using MarkLogic Content Pump (MLCP), we got:
19/03/15 17:01:19 ERROR mapreduce.ContentWriter: SVC-FILSTAT: File status error: stat64 '/mnt/mycontainer/Forests/forest-01/000043e5': Permission denied
Looking at the screenshot in the original post: we first tried with mycontainer, but as soon as we map it to Azure Blob it no longer shows up green, the way azureblob does. We still need to map azureblob to the "azureblob" folder.
It seems I am missing something here. Does it have anything to do with Azure Blob security settings?
In my test, when you mount Azure Blob storage on Linux, for example Ubuntu 18.04 (which I'm using), if you want to allow other users to use the mount directory, you can add the parameter -o allow_other when you execute the blobfuse command.
To allow access to all users, you can mount via the option -o allow_other.
Also, I think you should give other users permission through the chown command. For more details, see How to mount Blob storage as a file system with blobfuse.
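A minimal sketch of such a mount; the mount point, temp path, and config file location are placeholders, and -o allow_other is the relevant flag:

blobfuse /mnt/mycontainer --tmp-path=/mnt/blobfusetmp --config-file=/path/to/fuse_connection.cfg -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 -o allow_other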
First, I would like to thank Charles for his efforts and extended help on this issue. Thanks, Charles :). I am sure this will help me sometime, somewhere.
I found a guide on how to set up MarkLogic on Azure.
On page 27 there are steps for configuring MarkLogic for Azure Blob storage.
In summary, it is:
Create a storage account in Azure.
Create a blob container.
Go to the MarkLogic server (http://localhost:8001).
Go to Security -> Credentials.
Provide the storage account name and Azure storage key.
While creating the MarkLogic forest, mention the container path in the data directory:
azure://mycontainer/mydirectory/myfile
And you are done. No Blobfuse, no drive mount, just a configuration in MarkLogic
Awesome!!
It's working like a dream :)