Mount Azure File Share on Databricks Cluster

I'm trying to mount an Azure File Share on a Databricks cluster and get a 'permission denied' error. mount: /mnt/test: permission denied. Adding the --verbose flag doesn't provide any additional information. Can someone please help troubleshoot?
The error appears when the mount is executed:
sudo mount -t cifs //<storage_account>.file.core.windows.net/test /mnt/test -o credentials=/etc/smbcredentials/<storage_account>.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
The full script I run is:
sudo mkdir /mnt/test
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/<storage_account>.cred" ]; then
sudo bash -c 'echo "username=<storage_account>" >> /etc/smbcredentials/<storage_account>.cred'
sudo bash -c 'echo "password=<storage_account_key>" >> /etc/smbcredentials/<storage_account>.cred'
fi
sudo chmod 600 /etc/smbcredentials/<storage_account>.cred
sudo bash -c 'echo "//<storage_account>.file.core.windows.net/test /mnt/test cifs nofail,credentials=/etc/smbcredentials/<storage_account>.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30" >> /etc/fstab'
sudo mount -t cifs //<storage_account>.file.core.windows.net/test /mnt/test -o credentials=/etc/smbcredentials/<storage_account>.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
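A general CIFS troubleshooting note, not specific to Databricks: when mount only reports "permission denied", the kernel log usually records the underlying CIFS status/return code, which narrows the cause down considerably. A minimal check, assuming shell access on the node:
# look for the CIFS error behind the failed mount in the kernel log
sudo dmesg | grep -i cifs | tail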

Before you start loading Azure Files into Azure Databricks, make sure the Azure Storage File module is installed.
To install the Azure Storage File module, run: pip install azure-storage-file
Once the module is installed, you can follow the Stack Overflow thread to load Azure Files into Azure Databricks.
For more details, go through the link below; it has step-by-step instructions for setting up the Azure Storage file share for your Databricks cluster: https://pypi.org/project/azure-storage-file-share/
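Note that the package on the linked page is published as azure-storage-file-share, so if you follow that link the matching install command is:
pip install azure-storage-file-share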

Related

Docker run "error while creating mount source path '[...]': mkdir [...]: permission denied"

I'm trying to mount a directory in Docker run:
docker run --restart always -t -v /home/dir1/dir2/dir3:/dirX --name [...]
But I get the error:
error while creating mount source path '/home/dir1/dir2/dir3': mkdir /home/dir1/dir2/dir3: permission denied.
All the directories definitely exist, and the strange thing is that when mounting dir2 instead of dir3 it works fine:
docker run --restart always -t -v /home/dir1/dir2/:/dirX --name [...] # THIS IS WORKING
All the directories ('dir2' and 'dir3') have the same permissions: drwxr-x---
Any suggestions on what might be the problem? Why does one work and the other doesn't?
Thanks
Check the permissions of the folder you're trying to mount into Docker with ls -la; you might need to modify the permissions with chmod.
If you don't want to modify permissions, just add sudo to the beginning of the command.
sudo docker run --restart always -t -v /home/dir1/dir2/dir3:/dirX --name [...]
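The chmod route mentioned above could look something like the following; the exact modes are only an example, and whether loosening permissions is acceptable depends on your security requirements:
# grant read/execute on each path component so non-owner processes can traverse into dir3
sudo chmod o+rx /home/dir1 /home/dir1/dir2 /home/dir1/dir2/dir3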

Not all user-data steps are executed in a Terraform-managed AWS EC2 instance?

Terraform 0.12.x
I'm creating an AWS EC2 instance and want to execute shell scripts at startup, so I put them in the resource's user_data_base64. I see some of them executed, but not all.
locals {
user_data = <<EOF
#!/bin/bash
systemctl start docker.service
if [ ! -d /mnt/jenkins_master ]; then
mkdir -p /mnt/jenkins_master
mount /dev/xvdf /mnt/jenkins_master
fi
cp -f /etc/fstab /etc/fstab.bak
echo "UUID=`blkid -o value -s UUID /dev/xvdf` /mnt/jenkins_master ext4 defaults,nofail 0 2" >> /etc/fstab
sudo su - jenkins
`aws ecr get-login --no-include-email --region us-east-1`
docker pull "${var.ecr_url}/${var.jenkins_image}:${var.jenkins_version}"
docker run -d -p 8080:8080 -v /mnt/jenkins_master/jenkins_home:/var/jenkins_home --name "${var.jenkins_image}" "${var.ecr_url}/${var.jenkins_image}:${var.jenkins_version}"
EOF
}
resource "aws_instance" "jenkins_master_blue" {
...
user_data_base64 = base64encode(local.user_data)
...
}
I don't see my echo into /etc/fstab nor the docker run ... commands executed.
Here are a couple of commands in your script that look like mistakes:
As far as I know, AWS does not allow connecting to EC2 as the root user directly. For example, for CentOS it will be the centos user, or ec2-user for Amazon Linux. Did you switch to root before running this script? If not, you need the sudo prefix to be able to edit fstab and run systemctl start docker.service.
You are switching to the jenkins user (sudo su - jenkins), and after that trying to run the container as the jenkins user. Did you add the jenkins user to the docker group? If not, the jenkins user does not have permission to run containers.
You are creating the directory (mkdir -p /mnt/jenkins_master) as userX and mounting it (mount /dev/xvdf /mnt/jenkins_master) as userX. Usually that means only userX will be able to access (at least write to) this directory. Did you check whether the jenkins user has read/write permissions on this directory?
Why are you switching to the jenkins user with sudo su - jenkins? Does jenkins have the rights to control Docker?
Don't forget to enable the docker service at boot if that has not been done yet: sudo systemctl enable docker.service.
Simple explanation: it looks like the root cause is insufficient permissions.
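A rough sketch of how the jenkins/docker points above could be handled in the user-data script, assuming the goal is to run the container as the jenkins user; the registry URL, image, and tag below are placeholders standing in for the Terraform variables:
# make sure the docker daemon is enabled and running
systemctl enable --now docker.service
# allow the jenkins user to talk to the Docker daemon
usermod -aG docker jenkins
# run the docker commands as jenkins without opening an interactive shell
# (commands that follow a bare `sudo su - jenkins` in a script do not run as jenkins)
su - jenkins -c "docker pull <ecr_url>/<image>:<tag>"
su - jenkins -c "docker run -d -p 8080:8080 -v /mnt/jenkins_master/jenkins_home:/var/jenkins_home --name jenkins <ecr_url>/<image>:<tag>"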

Azure File Share Owner/Group Permissions Revert On VM Reboot

I am mounting an Azure File Share to /elasticdata/fileshare on my Ubuntu 16.04 LTS virtual machine. I mount the drive using the following script:
sudo mkdir /elasticdata/fileshare
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/fileshare.cred" ]; then
sudo bash -c 'echo "username=fileshare" >> /etc/smbcredentials/fileshare.cred'
sudo bash -c 'echo "password=password" >> /etc/smbcredentials/fileshare.cred'
fi
sudo chmod 600 /etc/smbcredentials/fileshare.cred
sudo bash -c 'echo "//fileshare.file.core.windows.net/analysis /elasticdata/fileshare cifs nofail,vers=3.0,credentials=/etc/smbcredentials/fileshare.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //fileshare.file.core.windows.net/analysis /elasticdata/fileshare -o vers=3.0,credentials=/etc/smbcredentials/fileshare.cred,dir_mode=0777,file_mode=0777,serverino,uid=$(id -u elasticsearch),gid=$(id -g elasticsearch)
In my last line, I set the owner and the group of the mount location to be that of the user elasticsearch. I can verify this is true after the drive is mounted.
I then make a symlink like so:
ln -s /elasticdata/fileshare/analysis /etc/elasticsearch
In /etc/elasticsearch/analysis, I can see the owner and group to be that of the elasticsearch user.
When I restart my VM, the owner and group permissions I set revert back to that of the root user and my elasticsearch cluster is unable to start due to the following error:
[HTTP/1.1 500 Internal Server Error]{"error":{"root_cause":[{"type":"access_control_exception","reason":"access denied (\"java.io.FilePermission\" \"/etc/elasticsearch/analysis/charmapping.txt\" \"read\")"}],"type":"access_control_exception","reason":"access denied (\"java.io.FilePermission\" \"/etc/elasticsearch/analysis/charmapping.txt\" \"read\")"},"status":500}.
How can I prevent the permissions from reverting? Or, how can I let elasticsearch gain access to the files a different way?
Try using /etc/fstab to mount the cifs filesystem at boot time.
A basic /etc/fstab looks like this
/dev/hda2 / ext2 defaults 1 1
/dev/hdb1 /home ext2 defaults 1 2
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
/dev/fd0 /media/floppy auto rw,noauto,user,sync 0 0
proc /proc proc defaults 0 0
/dev/hda1 swap swap pri=42 0 0
My guess is you want to add a line to the file for the cifs file system. It should look something like this.
//fileshare.file.core.windows.net/analysis /elasticdata/fileshare cifs defaults,uid=<user id you want>,gid=<group id you want> 0 0
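Note that the /etc/fstab entry in the question does not include the uid/gid options that the manual mount command passes, which would explain why ownership reverts to root after a reboot. Merging them into the existing entry could look like the line below; the numeric IDs of the elasticsearch user have to be filled in by hand, since fstab cannot evaluate $(id -u elasticsearch) at boot:
# /etc/fstab entry combining the question's mount options with fixed uid/gid (placeholders for the numeric ids)
//fileshare.file.core.windows.net/analysis /elasticdata/fileshare cifs nofail,vers=3.0,credentials=/etc/smbcredentials/fileshare.cred,dir_mode=0777,file_mode=0777,serverino,uid=<elasticsearch uid>,gid=<elasticsearch gid> 0 0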

How to mount azure file share to existing directory on linux vm

I have an existing directory on an Ubuntu 16.04 LTS virtual machine at /etc/elasticsearch. I also have created a file share in azure. I am able to mount file share to the VM successfully when the mount point is a new directory. However, when I attempt to mount the file share to /etc/elasticsearch, an existing directory that contains data, the existing directory's data gets overwritten completely by the contents of the file share. This causes me to lose the data that previously existed in /etc/elasticsearch, which I obviously do not want. I want the file share to be added in addition to the existing data in /etc/elasticsearch.
Here is what I tried:
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/credentials.cred" ]; then
sudo bash -c 'echo "username=username" >> /etc/smbcredentials/credentials.cred'
sudo bash -c 'echo "password=password" >> /etc/smbcredentials/credentials.cred'
fi
sudo chmod 600 /etc/smbcredentials/credentials.cred
sudo bash -c 'echo "//pathtofileshare/analysis /etc/elasticsearch cifs nofail,vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //pathtofileshare/analysis /etc/elasticsearch -o vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino
Link to file share documentation
Many thanks in advance for any help
I don't believe this is an issue; it's just how Linux mount works:
http://man7.org/linux/man-pages/man8/mount.8.html
The previous contents (if any) and owner and mode of dir become invisible, and as long as this filesystem remains mounted, the pathname dir refers to the root of the filesystem on device.
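If the goal is to have both the pre-existing files and the share's contents visible under /etc/elasticsearch, one common workaround is to seed the share with the existing data before mounting it over the directory, along these lines:
# mount the share at a temporary location, copy the existing data into it, then mount over the real path
sudo mkdir -p /mnt/seed
sudo mount -t cifs //pathtofileshare/analysis /mnt/seed -o vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino
sudo cp -a /etc/elasticsearch/. /mnt/seed/
sudo umount /mnt/seed
sudo mount -t cifs //pathtofileshare/analysis /etc/elasticsearch -o vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino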

Why won't mount.cifs use my credential file?

I have a script that needs to mount a Windows share to a Linux box, run a script, then unmount it. Despite following the man page for mount.cifs, the command fails to recognize the credentials file.
I made sure file sharing packages were present:
sudo yum install samba-client samba-common cifs-utils
Created the directory that the network share will mount to
sudo mkdir /share/
Created the credential file
sudo vim /root/.cifs
.cifs file contents
username=uname
password=pword
Created my .sh file
sudo vim /usr/bin/script.sh
script.sh contents
#!/bin/bash
mount.cifs //ipaddress/share /share/ -o credentials=/root/.cifs
<script which makes use of the share>
umount /share/
Made the script executable
sudo chmod u+x /usr/bin/script.sh
Tested script
cd /usr/bin
sudo ./script.sh
Despite having the credentials file specified, I am still prompted for a password for the root user (I am connecting to a Windows share that has no "root" user).
Output from running script:
Password for root#//ipaddress/share:
Can anyone figure out what I have done wrong? It seems consistent with all documentation I have read.
For some reason, modifying the script to the following worked:
mount -t cifs -o credentials=/root/.cifs //ipaddress/share /share/
cd /share/
./script.sh
umount /share/
Not sure why, since mount -t cifs just invokes mount.cifs, but if you are experiencing the same issue, that's how I finally got around it.
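If you hit the same prompt, it can also be worth double-checking the credentials file itself: mount.cifs expects plain key=value lines, plus an optional domain entry if the share requires one. The values below are placeholders:
# /root/.cifs -- credentials file format expected by mount.cifs
username=uname
password=pword
# domain=EXAMPLEDOMAIN   # uncomment only if the Windows share requires a domain (placeholder value)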
