I've been using this line in /etc/fstab for mounting a storage device to my host:
//url.to-my-storage.com/mystorage /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=1000,gid=1000,file_mode=0660,dir_mode=0770 0 0
I was also mounting it on another host, and I ran this to protect the files from being changed through the new host:
chmod -R 444 /mnt/backup
(I meant to protect the storage against writes from this host, but it turned out to change the mode of all the files on the storage.)
I assume the missing execute permissions are what is causing me this:
$ sudo mount -a
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I tried unmounting and mounting again; that didn't help, and I got the same permission error from the mount command.
Running ls on the directory shows this:
$ ls -la /mnt/backup
?????????? ? ? ? ? ? backup
HELP !
Unmounting a "Locked Out" Network Drive
To unmount a "locked out" network drive, you can try to force the unmount:
umount -f -t cifs /mnt/backup
If you are having trouble unmounting a drive, make sure you don't have a console open somewhere whose current working directory (CWD) is on the drive you are trying to unmount, and that no file on it is open in an editor, player, or the like.
Properly Mounting a Network Drive
You should set your permissions in your mount options rather than trying to apply them afterwards. You would want to replace these mount options:
rw,file_mode=0660,dir_mode=0770
with
ro
Currently you are mounting your CIFS drive as read-write (rw), giving files read-write permission (file_mode=0660) and directories read-write-execute (dir_mode=0770). Simply mounting the drive as read-only (ro) should suffice. (If you do need to fine-tune the file and dir modes, rather use umask.)
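Put together with the share from your question, the fstab entry would then look something like this (same server, credentials file, uid and gid as before; only the rw/mode options change):

```
//url.to-my-storage.com/mystorage /mnt/backup cifs iocharset=utf8,ro,credentials=/etc/backup-credentials.txt,uid=1000,gid=1000 0 0
```

After editing fstab, unmount (forcing if necessary) and run sudo mount -a again.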
I would also advise you to double check whether you are using uid and gid correctly: if the user ID or group ID used gets deleted, that could also lead to problems.
References
https://linux.die.net/man/8/mount
https://en.wikipedia.org/wiki/File_system_permissions
https://oracletechdba.blogspot.com/2017/06/umount-lsof-warning-cant-stat-cifs-file.html
https://stackoverflow.com/a/40527234/171993
I access my Azure VM on Linux. Using df -kh, I can see my /dev/sdb1 temporary disk:
(screenshot: https://i.stack.imgur.com/zKXmQ.png)
$ sudo -i blkid
...
/dev/sdb1: PARTUUID="7ec06285-01"
...
I want to use it to store data; however, despite googling and reading the Azure documentation, I did not find any way to add data to it.
cp test /dev/sdb1
cp: cannot create regular file '/dev/sdb1': Permission denied
sudo cp test /dev/sdb1
sudo: unable to resolve host HubertProduction: Temporary failure in name resolution
mkdir /dev/sdb1/TEST
mkdir: cannot create directory ‘/dev/sdb1/TEST’: Not a directory
How can I use /dev/sdb1 to store data and access it?
It is mounted, so do I need to format it? If so, how?
All the posts I found are about the fact that this is temporary storage with no backup: I understand that, and it is not the issue here.
I created a Linux Ubuntu VM, created a directory, and mounted it on the sdb1 temporary disk like below.
Without sudo :-
mkdir /data2
mkdir: cannot create directory ‘/data2’: Permission denied
With sudo, it worked:-
sudo mkdir /data2
Mounted the dir /data2 on sdb1:
sudo mount /dev/sdb1 /data2
cd /data2
Accessed /data2:
siliconuser@siliconvm:/data2$ lsblk
Now you can create files directly in /data2, and those files will be stored on the temporary disk sdb1.
You do not need to format the disk, as it is already mounted on the Azure Linux VM; you can, however, change the directory it is mounted on from /mnt to one of your own.
Alternatively, you can create files by moving to the /mnt directory, where the temporary disk sdb1 is mounted by default after VM creation, without needing to mount sdb1 on another directory.
There is also a DATALOSS_WARNING readme file there, which states that files created under this directory will be deleted, as this is a temporary disk.
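If you want the resource (temporary) disk handled automatically rather than mounting it by hand, the Azure Linux agent can format and mount it for you. A sketch, assuming a standard waagent setup; check your own /etc/waagent.conf for the exact option names and defaults:

```
# /etc/waagent.conf -- resource (temporary) disk handling
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt
```

Changing ResourceDisk.MountPoint and restarting the agent moves where the temporary disk gets mounted; the data-loss caveat still applies either way.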
Reference :-
https://amalgjose.com/2021/09/01/how-to-add-a-new-disk-to-a-linux-server-without-downtime-or-reboot/
I want to create a file in the /sys/kernel/security folder in Linux.
But sudo touch test returns a permission error.
It still fails after sudo chmod 777 /sys/kernel/security, so I tried changing the permissions of the /sys folder (yes, I know this is a bad way) and sudo -i. The file does not get created, although in all cases the mode gets set correctly: drwxrwxrwx.
Now I am actually out of ideas, so I am hoping for your tips.
Thanks.
/sys/kernel/security is the Linux Security Module (LSM) space, where kernel security modules can expose their data, both read-only and read-write.
mount | grep security
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
This is another virtual file system, mounted under /sys. You can't create files here, and there is no meaning at all in creating files here.
See the securityfs documentation for details!
I need to run an application on a VM, where I do my setup in a script that is run as root when the machine is built.
In this script I would like to mount a windows FS, so using CIFS.
So I am writing the following in the fstab:
//win/dir /my/dir cifs noserverino,ro,uid=1002,gid=1002,credentials=/root/.secret 0 0
After this, still in the same script, I try to mount it:
mount /my/dir
That results in two lines of output for each file:
chown: changing ownership of `/my/dir/afile': Read-only file system
Because I have a lot of files, this takes forever...
With the same fstab, I asked an admin to mount the same directory manually:
sudo mount /my/dir
-> this is very quick, with NO extra output.
I assume the difference of behavior is due to the fact that the script is run as root.
Any idea how to avoid the issue while keeping the script run as root (this is not under my control)?
Cheers.
Renaud
I've updated my OpenWrt firmware using the web interface. Now the web interface is unreachable.
I lost my root password, so I started my router (WR1043ND) in failsafe mode, but the mount_root command is not working:
$mount_root
/bin/ash: mount_root: not found
Any clue? I can't find any solution in the docs or online.
You can mount the jffs2 partition manually. This partition contains your configuration, so once you mount it, you will be able to edit the root password.
Use this command: mount -t jffs2 /dev/mtdblock3 /mnt. Please note that the mtd number may vary between routers. If there is nothing in the /mnt dir after issuing this command, try another mtdblock number.
Then go to the /mnt dir and remove the etc/shadow and etc/passwd files from there (i.e. /mnt/etc/shadow and /mnt/etc/passwd) to reset the root password.
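A less destructive alternative to deleting those files is to blank only root's password hash in the mounted copy, leaving the rest of the accounts intact. A sketch, assuming the partition is mounted at /mnt as above (blank_root_hash is just an illustrative helper name):

```shell
# Empty the password-hash field (the second colon-separated field)
# of the root entry in a shadow-format file, leaving everything
# else in the file untouched. Root then has an empty password.
blank_root_hash() {
    sed -i 's/^root:[^:]*:/root::/' "$1"
}

# On the router, after mounting the jffs2 partition:
#   blank_root_hash /mnt/etc/shadow
```

After unmounting and rebooting normally, log in as root with an empty password and set a new one with passwd immediately.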
I'd like to mount a remote directory through sshfs on my Debian machine, say at /work. So I added my user to the fuse group and ran:
sshfs user@remote.machine.net:/remote/dir /work
and everything works fine. However, it would be very nice to have the directory mounted on boot. So I tried the /etc/fstab entry given below:
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
sshfs asks for the password and mounts almost correctly. Almost, because my regular user has no access to the mounted directory, and when I run ls -la /, I get:
d????????? ? ? ? ? ? work
How can I get it with the right permissions through fstab?
Using the option allow_other in /etc/fstab allows users other than the one doing the actual mounting to access the mounted filesystem. When you boot your system and mount your sshfs, the mount is done by the root user instead of your regular user. When you add allow_other, users other than root can access the mount point. File permissions under the mount point still stay the same as they used to be, so if you have a directory with a 0700 mask there, it is still accessible only by root and the owner.
So, instead of
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
use
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user,allow_other 0 0
This did the trick for me, at least. I did not test this by booting the system; instead I just issued the mount command as root, then tried to access the mounted sshfs as a regular user.
Also, to complement the previous answer:
You should prefer the [user]@[host] syntax over the sshfs#[user]@[host] one.
Make sure you allow non-root users to specify the allow_other mount option in /etc/fuse.conf.
Make sure you use each sshfs mount at least once manually as root, so that the host's signature is added to the .ssh/known_hosts file.
$ sudo sshfs [user]@[host]:[remote_path] [local_path] -o allow_other,IdentityFile=[path_to_id_rsa]
REF: https://wiki.archlinux.org/index.php/SSHFS
Also, complementing the accepted answer: the user on the target machine needs a valid login shell (sudo chsh username -> /bin/bash).
I had a user whose shell was /bin/false, and this caused problems.
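A quick way to check this on the target machine is to read the user's login shell from the passwd database (login_shell is just an illustrative helper; getent and cut are standard tools):

```shell
# Print a user's login shell (the 7th field of the passwd entry).
# /bin/false or /usr/sbin/nologin here will break sshfs for that user.
login_shell() {
    getent passwd "$1" | cut -d: -f7
}

login_shell someuser
```

If it prints /bin/false or nologin, fix it with chsh as above before retrying the sshfs mount.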