mount unmount without sudo - linux

I am trying to write a script that would ssh into a host, perform a mount operation there, run some other commands, and exit.
The other commands (cd, cp) do not require sudo privileges, but the mount operation requires sudo permission. I want to write a script that would do:
ssh user@server "mount -t nfs xx.xx.xx.xx:/ /nfs -o rsize=4096,wsize=4096 ; cp pqr rst ; umount /nfs ;"
and some other non-sudo commands. How can I do this without sudo and without entering any passwords while the script is running?

Desktop Linux distributions use udisks to grant non-root users limited mounting privileges.
udisks version 2
udisksctl mount -b [device]
udisks version 1
udisks --mount [device]
Of course, if we are talking about a server VM, then these tools might not be installed.
Installing them would require root access (once).
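For example, a non-root user can typically mount and unmount a removable drive like this (the device name /dev/sdb1 is only a placeholder; check lsblk for yours):
udisksctl mount -b /dev/sdb1
udisksctl unmount -b /dev/sdb1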

You must add an /nfs entry to /etc/fstab on the server host.
The entry's option list must include the option user or users (depending on whether you want that user to be able to unmount the filesystem or not).
Example:
xx.xx.xx.xx:/ /nfs nfs rsize=4096,wsize=4096,user 0 0
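With that entry in place on the server, the script from the question no longer needs sudo at all; a sketch, reusing the paths from the question:
ssh user@server "mount /nfs ; cp pqr rst ; umount /nfs"
Because the user option is set, mount /nfs picks up the device and options from /etc/fstab and is permitted for an ordinary user.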

You can allow that user to mount without needing sudo power.
Use the NOPASSWD directive in /etc/sudoers.
Or you may prefer to write an expect script that has the password written into it and enters it when sudo prompts.
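For example (only a sketch; the exact paths to mount and umount differ between distributions, so check them with which mount), the sudoers entry might be:
user ALL=(root) NOPASSWD: /bin/mount, /bin/umount
and the script would then prefix the privileged commands with sudo:
ssh user@server "sudo mount -t nfs xx.xx.xx.xx:/ /nfs -o rsize=4096,wsize=4096 ; cp pqr rst ; sudo umount /nfs"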

Related

rsyncing between two non-login users

I have two machines A and B, both of which have 3 users:
root (I don't know its password; I can just switch to it using sudo su -)
login (used for sshing into both machines, has a password, is a sudoer)
mysql (standard non-interactive user running mysql server)
What I need to do is to rsync data directory (dir) belonging to mysql from machine A to machine B.
Obviously I can't just do:
rsync -avpE /dir/ B:/dir/
Because the login user on neither A nor B has read access to dir.
I can't do:
sudo -u mysql rsync -avpE /dir/ B:/dir/
Because now the local end on A has access to dir, but the remote end on B (which connects as login) doesn't.
So is it possible to construct an rsync command so I copy data across without using some temporary space?
rsync has an option called --rsync-path that might help you:
$ rsync |& grep rsync-path
--rsync-path=PROGRAM specify the rsync to run on the remote machine
The idea is to have the local rsync ssh to the remote machine (as user login) and, when it wants to start rsync on the remote machine, have it call sudo -u mysql rsync instead of plain rsync. So something like this:
sudo -u mysql rsync -avpE --rsync-path="sudo -u mysql rsync" /dir/ login@B:/dir/
Of course for this to work, the user login on the remote machine must be able to sudo -u mysql without a password.
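A sketch of such a sudoers entry on B (the path to rsync is an assumption; confirm it with which rsync):
login ALL=(mysql) NOPASSWD: /usr/bin/rsync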

Changing user to root when connected to a linux server and copying files

My setup doesn't allow connecting to the server directly as root. The code below copies files from a server to my computer and it works, but I don't have access to many files because only root can access them. How can I connect to the server as a regular user and then copy its files by switching to root?
Code I want to change:
sshpass -p "password" scp -q -r username@74.11.11.11:some_directory copy_it/here/
In other words, I want to be able to remotely copy files which are only accessible to root on a remote server, but don't wish to access the remote server via ssh/scp directly as root.
Is it possible through only ssh and not sshpass?
If I understand your question correctly, you want to be able to remotely copy files which are only accessible to root on the remote machine, but you don't wish to (or can't) access the remote machine via ssh/scp directly as root. And a separate question is whether it could be done without sshpass.
(Please understand that the solutions I suggest below have various security implications and you should weigh up the benefits versus potential consequences before deploying them. I can't know your specific usage scenario to tell you if these are a good idea or not.)
When you ssh/scp as a user, you don't have access to the files which are only accessible to root, so you can't copy all of them. So you need to instead "switch to root" once connected in order to copy the files.
"Switching to root" for a command is accomplished by prefixing it with sudo, so the approach would be to remotely execute commands which copy the files via sudo to /tmp on the remote machine, changes their owner to the connected user, and then remotely copy them from /tmp:
ssh username@74.11.11.11 "sudo cp -R some_directory /tmp"
ssh username@74.11.11.11 "sudo chown -R username:username /tmp/some_directory"
scp -q -r username@74.11.11.11:/tmp/some_directory copy_it/here/
ssh username@74.11.11.11 "rm -r /tmp/some_directory"
However, sudo prompts for the user's password, so you'll get a "sudo: no tty present and no askpass program specified" error if you try this. So you need to edit /etc/sudoers on the remote machine to authorize the user to use sudo for the needed commands without a password. Add these lines:
username ALL=NOPASSWD: /bin/cp
username ALL=NOPASSWD: /bin/chown
(Or, if you're cool with the user being able to execute any command via sudo without being prompted for password, you could instead use:)
username ALL=NOPASSWD: ALL
Now the above commands will work and you'll be able to copy your files.
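When making those sudoers edits, it is safest to use visudo, which syntax-checks the file before saving, and you can then review your effective rights with sudo -l:
sudo visudo
sudo -l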
As for avoiding sshpass, you could instead use a public/private key pair, in which a private key on the local machine is matched against a public key stored on the remote machine to authenticate the user, rather than a password.
To set this up, on your local machine, type ssh-keygen. Accept the default file (/home/username/.ssh/id_rsa). Use an empty passphrase. Then append the file /home/username/.ssh/id_rsa.pub on the local machine to /home/username/.ssh/authorized_keys on the remote machine:
cat /home/username/.ssh/id_rsa.pub | ssh username@74.11.11.11 \
"mkdir -m 0700 -p .ssh && cat - >> .ssh/authorized_keys && \
chmod 0600 .ssh/authorized_keys"
Once you've done this, you'll be able to use ssh or scp from the local machine without password authorization.
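On most systems, the ssh-copy-id helper performs the append-and-chmod step above for you:
ssh-copy-id username@74.11.11.11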

Sshfs as regular user through fstab

I'd like to mount a remote directory through sshfs on my Debian machine, say at /work. So I added my user to the fuse group and ran:
sshfs user@remote.machine.net:/remote/dir /work
and everything works fine. However it would be very nice to have the directory mounted on boot. So I tried the /etc/fstab entry given below:
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
sshfs asks for a password and mounts almost correctly. Almost, because my regular user has no access to the mounted directory; when I run ls -la /, I get:
d????????? ? ? ? ? ? work
How can I get the right permissions through fstab?
Using the option allow_other in /etc/fstab allows users other than the one doing the actual mounting to access the mounted filesystem. When your system boots and mounts the sshfs, the mount is done by root instead of your regular user. When you add allow_other, users other than root can access the mount point. File permissions under the mount point stay the same as they were, so if there is a directory with a 0700 mask there, it is still accessible only by root and its owner.
So, instead of
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user 0 0
use
sshfs#user@remote.machine.net:/remote/dir /work fuse user,_netdev,reconnect,uid=1000,gid=1000,idmap=user,allow_other 0 0
This did the trick for me at least. I did not test this by booting the system, but instead just issued the mount command as root, then tried to access the mounted sshfs as a regular user.
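A minimal way to reproduce that test with the /work mount point from the question (run the first command as root, the second as your regular user):
sudo mount /work
ls -la /work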
Also, to complement the previous answer (a combined fstab sketch follows these points):
You should prefer the [user]@[host] syntax over the sshfs#[user]@[host] one.
Make sure you allow non-root users to specify the allow_other mount option by enabling user_allow_other in /etc/fuse.conf.
Make sure you run each sshfs mount at least once manually as root so the host's key is added to root's .ssh/known_hosts file.
$ sudo sshfs [user]@[host]:[remote_path] [local_path] -o allow_other,IdentityFile=[path_to_id_rsa]
REF: https://wiki.archlinux.org/index.php/SSHFS
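Putting those points together, an /etc/fstab line using the preferred syntax and the fuse.sshfs type might look like this (a sketch only; the IdentityFile path and the uid/gid values are placeholders for your own):
user@remote.machine.net:/remote/dir /work fuse.sshfs _netdev,reconnect,allow_other,IdentityFile=/home/youruser/.ssh/id_rsa,uid=1000,gid=1000,idmap=user 0 0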
Also, complementing the accepted answer: the user on the target machine needs a valid login shell (for example, set one with sudo chsh username and choose /bin/bash).
I had a user whose shell was /bin/false, and this caused problems.
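To check and, if needed, change the shell non-interactively (username is a placeholder):
getent passwd username
sudo chsh -s /bin/bash username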

Cannot access the shared folder in Virtual Box

I have problem with accessing the shared folder.
My host OS is Windows 7 Enterprise Edition SP1, and the guest OS is Ubuntu Linux 10.04 Desktop. I'm using VirtualBox 4.2.10, and I have installed the VirtualBox Guest Additions and the Oracle VM VirtualBox Extension Pack.
When I enter the command:
mat#mat-desktop:~$ cd /media/sf_MAT/
bash: cd: /media/sf_MAT/: Permission denied
again with sudo:
sudo cd /media/sf_MAT/
sudo: cd: command not found
What could be the solution?
The issue is that your user "mat" is not in the group "vboxsf". This is the group that has read/write permissions to that folder. Root also has permission to that folder because it is in the group "vboxsf".
What you need is to add your user "mat" to the same group. Start your terminal and write the following line:
sudo usermod -aG vboxsf mat
sudo - because you need root permission
usermod - the command to change the user properties
-a means append to the group
-G means you will supply the group name now
vboxsf is the group name that you want your user to be in
mat is your username
A reboot, or a logout and login, may be required for the change to take effect.
After this operation you can verify that your user is indeed in the vboxsf group by doing this:
cat /etc/group | grep "vboxsf"
you will see your username there.
Now you should be able to access that folder. If you have any issues, just comment here and I will tell you about alternative methods.
Also, if all of this sounds too geeky, you can do the same thing using the graphical tools. One guide is here http://www.howtogeek.com/75705/access-shared-folders-in-a-virtualbox-ubuntu-11.04-virtual-machine/
Also, newer versions of VirtualBox (4.3.20, I guess) have a drag-and-drop feature that lets you move files and folders into your virtual machine simply by dragging them. Isn't that nice? :)
Open your Virtual Machine's Terminal. Type sudo su then enter your password.
Write the following commands
sudo usermod -a -G vboxsf your_account_name
sudo chown -R your_account_name:users /media/your_share_folder_name/
Example sudo usermod -a -G vboxsf mat
Example sudo chown -R mat:users /media/sf_MAT/
Now reboot your Virtual Machine and check the shared folder again
I have this problem too. It seems to be caused by your user account not having permission to use the folders. The only solution I have is to become root using the su command. You can then read, write, and navigate freely. You might have to set a root password first using sudo passwd root.
Reason: sudo cd will not work because sudo runs programs, not shell builtins, and cd is a builtin command.
Solution: try sudo -i; this elevates you to the superuser.
You will then be logged in as root and can use any command you wish,
e.g.:
sudo -i
cd folder/path
use exit to return back to normal user.
You only need to follow these steps:
in the terminal execute:
sudo adduser yourUserName vboxsf
Enter your password; expect the following message:
Adding user `yourUserName' to group `vboxsf' ...
Adding user yourUserName to group vboxsf
Done.
Log out and back in.
You now can access your shared folders (with the limitations you set for them via VirtualBox)
For everyone else: just add a new optical drive under Storage (via Settings) and attach the Guest Additions ISO manually (it is inside the VirtualBox installation directory) on the host OS. Then open the mounted drive and run the installer in the guest OS.
Reboot and enjoy.

sudoers NOPASSWD ineffective in desktop launcher script

I have a launcher script in Linux Mint 13 so that I can click an icon on the desktop to mount my NAS. In order to use /bin/mount without a password I must add this line to sudoers:
<username> ALL = NOPASSWD: /bin/mount
The script to mount the NAS is very simple:
#!/bin/bash
if [ 0 = `sudo mount |grep -c nasbox` ]
then
    sudo mount -a
fi
If I run the script from a terminal it works without the need to enter a password, but when it is run from a launcher (using "Application in Terminal") it asks for the password. If I give the password it accepts it and runs, so it must know which user is running it and allow that user to use sudo; it honours part of sudoers, but it doesn't honour the NOPASSWD keyword for /bin/mount. How do I get the NOPASSWD to work here?
Delete /var/run/sudo, /var/lib/sudo, and /var/db/sudo to flush out anything that sudo may have cached. Then make sure that your system's time and date are set correctly. As a (paranoid) security measure, sudo may prompt for the password if it determines that the system clock is unreliable.
This actually happened to me once on a system without a persistent RTC clock. I think I solved it by removing those directories and setting the system clock to a future value at startup. The sudo utility may have changed by now, so I'm not sure if this still applies, but give it a try!
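Concretely, something along these lines (a sketch; the exact set of directories varies by distribution, and not all of them will exist), after which sudo -l should list the NOPASSWD entry for /bin/mount:
sudo rm -rf /var/run/sudo /var/lib/sudo /var/db/sudo
sudo -K
sudo -l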
