mount cifs takes too long due to chown for each file - linux

I need to run an application on a VM, where I can do my setup in a script that will be run as root when the machine is built.
In this script I would like to mount a Windows file system, so I am using CIFS.
So I write the following in the fstab:
//win/dir /my/dir cifs noserverino,ro,uid=1002,gid=1002,credentials=/root/.secret 0 0
After this, still in the same script, I try to mount it:
mount /my/dir
That results in two lines of output for each file:
chown: changing ownership of `/my/dir/afile': Read-only file system
Because I have a lot of files, this takes forever...
With the same fstab, I asked an admin to manually mount the same directory:
sudo mount /my/dir
-> this is very quick with NO extra output.
I assume the difference in behavior is due to the fact that the script is run as root.
Any idea how to avoid the issue while keeping the script run as root (this is not under my control)?
Cheers.
Renaud

Related

Normal user touching a file in /var/run failed

I have a program called HelloWorld belonging to the user test.
HelloWorld creates a file HelloWorld.pid in /var/run to ensure a single instance.
I am using the following command to try to give test access to /var/run:
usermod -a -G root test
However, when I run it, it failed.
Could someone help me?
What are the permissions on /var/run? On my system, /var/run is rwxr-xr-x, which means only the user root can write to it. The permissions do not allow write access by members of the root group.
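You can check what they actually are on your system:
ls -ld /var/run
If the mode shown is drwxr-xr-x with owner and group root, only root itself can write there.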
The normal way of handling this is by creating a subdirectory of /var/run that is owned by the user under which you'll be running your service. E.g.,
sudo mkdir /var/run/helloworld
sudo chown myusername /var/run/helloworld
Note that /var/run is often an ephemeral filesystem that disappears when your system reboots. If you would like your target directory to be created automatically when the system boots you can do that using the systemd tmpfiles service.
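For example, a tmpfiles.d snippet along these lines would recreate the directory on every boot (the file name, mode, and user name are illustrative, not prescribed by this answer):
# /etc/tmpfiles.d/helloworld.conf
# type  path                 mode  user        group       age
d       /var/run/helloworld  0755  myusername  myusername  -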
Some linux systems store per-user runtime files in /var/run/user/UID/.
In this case you can create your pid file in /var/run/user/$(id -u test)/HelloWorld.pid.
Alternatively just use /tmp.
You may want to use the user's name as a prefix to the pid filename to avoid collision with other users, for instance /tmp/test-HelloWorld.pid.
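A minimal shell sketch of that approach (the fallback logic and names are illustrative):
# Prefer the per-user runtime directory; fall back to /tmp with a username prefix.
PIDDIR="/var/run/user/$(id -u)"
[ -d "$PIDDIR" ] || PIDDIR=/tmp
PIDFILE="$PIDDIR/$(id -un)-HelloWorld.pid"
echo $$ > "$PIDFILE"    # record the current process ID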

path /tmp does not correspond to a regular file

This happens when I have:
an executable that is in the /tmp directory (say /tmp/a.out)
it is run by a root shell
Linux
SELinux on (the default for RedHat, CentOS, etc.)
Apparently trying to run an executable that sits in the /tmp directory as root revokes the privileges. Any idea how to get around this issue, other than turning off SELinux? Thanks.
You can set a file context on the binary (or on the directory containing it) in /tmp that you want to run:
sudo semanage fcontext -a -t bin_t /tmp/location
Then restorecon:
sudo restorecon -vR /tmp/location
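To verify that the context was applied, ls -Z /tmp/location should now show bin_t as the type.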
Just have a look at the mount options for the /tmp directory; most probably you have the noexec option on it (there are many security reasons for doing that, the first being that anyone can put a file in the /tmp directory).
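To check whether that is the case, inspect the current mount options (findmnt is part of util-linux; /tmp may also just be part of the root filesystem rather than a separate mount):
findmnt -no OPTIONS /tmp
If noexec appears in the output, binaries under /tmp cannot be executed from there regardless of their permission bits.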

Locked out of cifs mounted storage

I've been using this line in /etc/fstab for mounting a storage device to my host:
//url.to-my-storage.com/mystorage /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=1000,gid=1000,file_mode=0660,dir_mode=0770 0 0
I was mounting it to another host, and I ran this to protect the files from being changed through the new host:
chmod -R 444 /mnt/backup
(I tried to protect the storage from writes from this host, which turned out to change the mode of all the files on the storage.)
I assume the missing execute permissions are what is causing this:
$ sudo mount -a
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I tried unmounting and mounting again, that didn't help, got the same permission error when using the mount command.
Running ls on the dir shows this:
$ ls -la /mnt/backup
?????????? ? ? ? ? ? backup
HELP!
Dismounting a "Locked Out" Network Drive
To dismount a "locked out" network drive, you can try to force the unmount:
umount -f -t cifs /mnt/backup
If you are having trouble dismounting a drive, make sure that you don't have a console open somewhere whose current working directory (CWD) is on the drive you are trying to dismount, or a file open in an editor or player somewhere, or such.
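If you are not sure what is still holding the mount busy, fuser or lsof (assuming they are installed) can list the offending processes:
fuser -vm /mnt/backup
lsof +D /mnt/backup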
Properly Mounting a Network Drive
You should add your permissions in your mount options rather than trying to apply them afterwards. You would want to replace these mount options:
rw,file_mode=0660,dir_mode=0770
with
ro
Currently you are mounting your CIFS drive as read-write (rw), giving files read-write permission (file_mode=0660) and directories read-write-execute (dir_mode=0770). Simply mounting the drive as read-only (ro) should suffice. (If you do need to fine-tune the file and dir modes, use umask instead.)
I would also advise you to double check whether you are using uid and gid correctly: if the user ID or group ID used gets deleted, that could also lead to problems.
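Putting that together, a read-only entry for the same share and credentials file as in the question would look something like:
//url.to-my-storage.com/mystorage /mnt/backup cifs iocharset=utf8,ro,credentials=/etc/backup-credentials.txt,uid=1000,gid=1000 0 0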
References
https://linux.die.net/man/8/mount
https://en.wikipedia.org/wiki/File_system_permissions
https://oracletechdba.blogspot.com/2017/06/umount-lsof-warning-cant-stat-cifs-file.html
https://stackoverflow.com/a/40527234/171993

Input/Output error when copying files to a mount in Linux

I have a Linux mount on my Jenkins build server. After a job in Jenkins succeeds, a script is called which copies the files from the workspace to different directories on the mount. Each time I mount, the copy operation succeeds, but after a few hours it fails with an I/O error: cannot copy. I have to remount the share again to get things going.
Any ideas on a fix? I have been struggling with this for 2 weeks now. I do not want to remount again and again.
Command I used: mount -t cifs -o rw,noperm,username=xyz,password=* //remoteserver/path /local/path.
Thanks
Not sure if this will help you, but this is something that I do for my scripts.
You said that you have a script that copies the files from the workspace to the mount. Why don't you add a condition to the script to check whether the mount exists and, if it does not, remount it, or something like that.
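A minimal sketch of that idea (the share and mount point are the ones from the question; the credentials file is an assumption, use whatever authentication you already have):
#!/bin/sh
# Remount the CIFS share if it has gone away, then copy as before.
MOUNT_POINT=/local/path
if ! mountpoint -q "$MOUNT_POINT"; then
    # credentials file path is an assumption; the question used username=/password= options instead
    mount -t cifs -o rw,noperm,credentials=/root/.cifs-creds //remoteserver/path "$MOUNT_POINT"
fi
# ... copy the workspace files to $MOUNT_POINT here ...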

Sshfs as regular user through fstab

I'd like to mount a remote directory through sshfs on my Debian machine, say at /work. So I added my user to the fuse group and I run:
sshfs user@remote.machine.net:/remote/dir /work
and everything works fine. However, it would be very nice to have the directory mounted on boot, so I tried the /etc/fstab entry given below:
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
sshfs asks for the password and mounts almost correctly. Almost, because my regular user has no access to the mounted directory, and when I run ls -la /, I get:
d????????? ? ? ? ? ? work
How can I get it mounted with the right permissions through fstab?
Using the option allow_other in /etc/fstab allows users other than the one doing the actual mounting to access the mounted filesystem. When you boot your system and mount your sshfs, it is done by the user root instead of your regular user. When you add allow_other, users other than root can access the mount point. File permissions under the mount point still stay the same as they used to be, so if you have a directory with a 0700 mask there, it is not accessible by anyone but root and the owner.
So, instead of
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
use
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user,allow_other 0 0
This did the trick for me at least. I did not test this by booting the system, but instead just issued the mount command as root, then tried to access the mounted sshfs as a regular user.
Also, to complement the previous answer:
You should prefer the [user]@[host] syntax over the sshfs#[user]@[host] one.
Make sure you allow non-root users to specify the allow_other mount option in /etc/fuse.conf (the relevant line is shown after this list).
Make sure you use each sshfs mount at least once manually as root so the host's signature is added to root's .ssh/known_hosts file.
$ sudo sshfs [user]@[host]:[remote_path] [local_path] -o allow_other,IdentityFile=[path_to_id_rsa]
REF: https://wiki.archlinux.org/index.php/SSHFS
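To expand on the /etc/fuse.conf point above: enabling allow_other for non-root users means uncommenting (or adding) this single line in that file:
user_allow_other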
Also, complementing the accepted answer: the user on the target machine needs to have a valid login shell there, e.g. sudo chsh -s /bin/bash username.
I had a user who had /bin/false, and this caused problems.
