I am mounting an Azure File Share to /elasticdata/fileshare on my Ubuntu 16.04 LTS virtual machine. I mount the share using the following script:
sudo mkdir /elasticdata/fileshare
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/fileshare.cred" ]; then
sudo bash -c 'echo "username=fileshare" >> /etc/smbcredentials/fileshare.cred'
sudo bash -c 'echo "password=password" >> /etc/smbcredentials/fileshare.cred'
fi
sudo chmod 600 /etc/smbcredentials/fileshare.cred
sudo bash -c 'echo "//fileshare.file.core.windows.net/analysis /elasticdata/fileshare cifs nofail,vers=3.0,credentials=/etc/smbcredentials/fileshare.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //fileshare.file.core.windows.net/analysis /elasticdata/fileshare -o vers=3.0,credentials=/etc/smbcredentials/fileshare.cred,dir_mode=0777,file_mode=0777,serverino,uid=$(id -u elasticsearch),gid=$(id -g elasticsearch)
In the last line, I set the owner and group of the mount location to those of the elasticsearch user. I can verify this is true after the share is mounted.
I then make a symlink like so:
ln -s /elasticdata/fileshare/analysis /etc/elasticsearch
In /etc/elasticsearch/analysis, I can see that the owner and group are those of the elasticsearch user.
When I restart my VM, the ownership I set reverts to the root user, and my elasticsearch cluster is unable to start due to the following error:
[HTTP/1.1 500 Internal Server Error]{"error":{"root_cause":[{"type":"access_control_exception","reason":"access denied (\"java.io.FilePermission\" \"/etc/elasticsearch/analysis/charmapping.txt\" \"read\")"}],"type":"access_control_exception","reason":"access denied (\"java.io.FilePermission\" \"/etc/elasticsearch/analysis/charmapping.txt\" \"read\")"},"status":500}
How can I prevent the permissions from reverting? Or, how can I give elasticsearch access to the files in a different way?
Try using /etc/fstab to mount the cifs filesystem at boot time.
A basic /etc/fstab looks like this:
/dev/hda2 / ext2 defaults 1 1
/dev/hdb1 /home ext2 defaults 1 2
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
/dev/fd0 /media/floppy auto rw,noauto,user,sync 0 0
proc /proc proc defaults 0 0
/dev/hda1 swap swap pri=42 0 0
My guess is you want to adjust the cifs line your script already appends: the fstab entry is missing the uid= and gid= options you pass to mount on the command line, so after a reboot the share comes up owned by root. It should look something like this.
//fileshare.file.core.windows.net/analysis /elasticdata/fileshare cifs defaults,uid=<user id you want>,gid=<group id you want> 0 0
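For example, with the paths from the question, the appended entry could be written like this (a sketch; fstab is read verbatim, so the $(id ...) substitutions must be expanded by the shell at the moment the line is written, and the stale entry the script appended earlier should be removed first):
sudo bash -c "echo \"//fileshare.file.core.windows.net/analysis /elasticdata/fileshare cifs nofail,vers=3.0,credentials=/etc/smbcredentials/fileshare.cred,dir_mode=0777,file_mode=0777,serverino,uid=$(id -u elasticsearch),gid=$(id -g elasticsearch) 0 0\" >> /etc/fstab"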
Related
I'm trying to mount an Azure File Share on a Databricks cluster and get a 'permission denied' error: mount: /mnt/test: permission denied. Adding the --verbose flag doesn't provide any additional information. Can someone please help troubleshoot?
The error appears when the final mount command in the following script is executed:
sudo mkdir /mnt/test
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/<storage_account>.cred" ]; then
sudo bash -c 'echo "username=<storage_account>" >> /etc/smbcredentials/<storage_account>.cred'
sudo bash -c 'echo "password=<storage_account_key>" >> /etc/smbcredentials/<storage_account>.cred'
fi
sudo chmod 600 /etc/smbcredentials/<storage_account>.cred
sudo bash -c 'echo "//<storage_account>.file.core.windows.net/test /mnt/test cifs nofail,credentials=/etc/smbcredentials/<storage_account>.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30" >> /etc/fstab'
sudo mount -t cifs //<storage_account>.file.core.windows.net/test /mnt/test -o credentials=/etc/smbcredentials/<storage_account>.cred,dir_mode=0777,file_mode=0777,serverino,nosharesock,actimeo=30
Before you start loading Azure Files into Azure Databricks, make sure the Azure Storage File module is installed.
To install the Azure Storage File module, run: pip install azure-storage-file
Once the module is installed, follow the Stack Overflow thread on loading Azure Files into Azure Databricks.
For more details, go through the link below; it has step-by-step instructions for setting up the Azure Files share for your Databricks cluster: https://pypi.org/project/azure-storage-file-share/
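Note that the linked page documents the newer, split-out package; if you use it instead of azure-storage-file, the install would be:
pip install azure-storage-file-share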
Parrot is based on Debian. Everything I do works fine on Ubuntu 18.04 LTS and 20.04 LTS; on Parrot it does not (at least not in my environment). This is a fresh, default installation with a static IP, fully patched, and after a few reboots.
Windows is 8.1 Pro in a domain (2012R2 forest level), fully patched, no antivirus, and the firewall allows the traffic. The user is a domain admin with no special characters in the name or password, just to make it work.
So, to make it easier, I do everything on the command line as root (sudo -i).
nano /scripts/creds
username=user1
password=Password1
domain=test.local
The command:
mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
On new Linux installations the highest SMB version is negotiated by default, like other things (yay), so forcing vers= doesn't change much (it works either way).
It works from the command line (sudo): no errors, and the Windows files and folders appear in /mnt/disk_d.
It works from a bash script, ./mount_windows.sh, with this line inside.
It doesn't work in /etc/fstab. The command
mount -a -v
generates "parse error at line 19 -- ignored"; line 19 is the cifs entry. The physical disks report "already mounted".
So I tried adding one or more of these options:
"file_mode=0777,dir_mode=0777", "serverino" or "noserverino", "sec=ntlmv2", "perm", "auto", "vers=3.0", " 0 0"
or just mixed everything in different positions, with no success. Remember, it works from the command line with no additional options.
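For comparison, a minimal six-field fstab entry for this share would look like the line below (a sketch, reusing the credentials file from above; _netdev delays the mount until the network is up and nofail keeps a failed mount from blocking boot):
//192.168.1.10/d$ /mnt/disk_d cifs credentials=/scripts/creds,_netdev,nofail 0 0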
It doesn't work from /etc/crontab either. mount.cifs sits in /sbin, so the crontab PATH covers it:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
I added:
* * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
53 * * * * * root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
#reboot mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
#reboot root mount -t cifs //192.168.1.10/d$ /mnt/disk_d -o credentials=/scripts/creds
#reboot sudo bash -x /scripts/mount_windows.sh
Restarting cron shows no errors:
systemctl restart cron
None of these mounted the disk after a full reboot.
So I added
echo "1" >> /scripts/log.txt
to check whether anything is being processed. The file is created and "1" is appended.
After each reboot there is nothing in /var/log/messages.
I don't know why this is so hard to get working. It works from the command line and from a shell script.
I have an existing directory on an Ubuntu 16.04 LTS virtual machine at /etc/elasticsearch. I have also created a file share in Azure. I am able to mount the file share to the VM successfully when the mount point is a new directory. However, when I attempt to mount the file share to /etc/elasticsearch, an existing directory that contains data, the existing directory's data gets overwritten completely by the contents of the file share. This causes me to lose the data that previously existed in /etc/elasticsearch, which I obviously do not want. I want the file share to be added alongside the existing data in /etc/elasticsearch.
Here is what I tried:
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/credentials.cred" ]; then
sudo bash -c 'echo "username=username" >> /etc/smbcredentials/credentials.cred'
sudo bash -c 'echo "password=password" >> /etc/smbcredentials/credentials.cred'
fi
sudo chmod 600 /etc/smbcredentials/credentials.cred
sudo bash -c 'echo "//pathtofileshare/analysis /etc/elasticsearch cifs nofail,vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //pathtofileshare/analysis /etc/elasticsearch -o vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino
Link to file share documentation
Many thanks in advance for any help
I don't believe this is an issue; it's just how Linux mount works:
http://man7.org/linux/man-pages/man8/mount.8.html
The previous contents (if any) and owner and mode of dir become invisible, and as long as this filesystem remains mounted, the pathname dir refers to the root of the filesystem on device.
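In other words, the previous contents are not deleted, only hidden while the share is mounted over them. A quick way to confirm:
sudo umount /etc/elasticsearch
ls /etc/elasticsearch    # the original contents are visible again
If you want both the share and the existing files reachable, mount the share on a fresh directory and symlink it into /etc/elasticsearch, as done in the first question above.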
I'm using "GlusterFS" Client, to mount the GlusterFS Volume on my Web Server. Below is the MOUNT command when I manually mount from the command line:
# mount -t glusterfs -o aux-gfid-mount gluster1:/gv0 /var/www/html
I don't know how to put that -o aux-gfid-mount option inside /etc/fstab, so my fstab entry is still lacking that option:
gluster1:/gv0 /var/www/html/ glusterfs defaults,_netdev,fetch-attempts=5 0 0
How do I put that -o aux-gfid-mount option inside the fstab please?
As per my comment:
gluster1:/gv0 /var/www/html/ glusterfs defaults,_netdev,aux-gfid-mount,fetch-attempts=5 0 0
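You can verify the entry without a reboot by remounting from fstab and checking the active options:
sudo umount /var/www/html
sudo mount /var/www/html         # mount reads the options from /etc/fstab
mount | grep /var/www/html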
I can't understand how exactly this works in Linux.
For example, I want only users in a certain group to be able to execute a certain file (I hope this is possible without visudo).
I create a system user and system group like:
useradd -K UID_MIN=100 -K UID_MAX=499 -K GID_MIN=100 -K GID_MAX=499 -p \* -s /sbin/nologin -c "testusr daemon,,," -d "/var/testusr" testusr
I add my current user, named user, to the group testusr (this may not be cross-platform):
adduser user testusr
I create a test shell script and set permissions:
touch test.sh
chmod ug+x test.sh
sudo chown testusr:testusr test.sh
But I still can't start test.sh as user:
./test.sh
-> Error
Now I look at some system groups like cdrom to check how they work. My user is in the cdrom group and can use the CD-ROM drive on my computer:
$ ls -al /dev/cdrom
lrwxrwxrwx 1 root root 3 Apr 17 12:55 /dev/cdrom -> sr0
$ ls -al /dev/sr0
brw-rw----+ 1 root cdrom 11, 0 Apr 17 12:55 /dev/sr0
Update:
The ./test.sh command started working as I wanted after a system reboot. Strange...
I'm on Ubuntu Studio 15.10
The group changes are reflected only upon re-login, which is why it started working after the reboot.
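If you don't want to log out and back in, newgrp starts a subshell with the new group membership already active:
newgrp testusr    # subshell whose current group is testusr
./test.sh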