I have remotely mounted a filesystem using SSHFS to directory /mnt/sshfs.
I need to find out, using a shell script, whether this SSHFS mount is working correctly or whether there is a "connection reset by peer" problem.
If I try to access an SSHFS filesystem that is in a "disconnected" state, the system freezes and waits until it eventually times out.
I want to avoid that. I need to know if SSHFS is working as expected or if there is some connection problem without freezing the system.
I guess just grep the mount command?
if (( $(mount | grep -e 'whatever you like' | wc -l) > 0 )); then
    echo "mounted"
fi
As far as I can tell, SSHFS has no means of checking connectivity other than trying to access a file and failing.
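If all you need is a check that cannot freeze your script, one common workaround is to bound that test access with timeout. A minimal sketch, assuming GNU coreutils' timeout and the mount point /mnt/sshfs from the question:
# Sketch: bound the access test so a dead SSHFS mount cannot hang the script.
# Assumes GNU coreutils 'timeout' and the mount point /mnt/sshfs.
if timeout 3 stat /mnt/sshfs >/dev/null 2>&1; then
    echo "sshfs mount is responding"
else
    echo "sshfs mount is stuck or disconnected"
fi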
You should be able to pass SSH options when mounting SSHFS, such as ServerAliveInterval and ServerAliveCountMax (see man ssh_config), that will terminate your SSH connection "early". Also make sure to use TCPKeepAlive so the connection is killed if your internet connection should drop.
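For example (a sketch; the user, host, remote path and interval values are placeholders), those options can be passed straight through on the sshfs command line:
# Sketch: pass SSH keepalive options through sshfs so a dead link is detected
# and torn down (and, with reconnect, re-established) instead of hanging forever.
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
    user@server:/remote/dir /mnt/sshfs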
An interesting way to tackle the problem of knowing the connection state would be to extend sshfs (which isn't really that complicated a FUSE module) and add an ioctl that you could query to find out whether everything is in order, without blocking if it isn't.
You can use "mountpoint". I just found out about it in this thread:
https://unix.stackexchange.com/a/39110
I tested it with SSHFS and it correctly reports "is a mountpoint" and "is not a mountpoint".
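For example, with the mount point from the question, the -q flag keeps it quiet and you just test the exit status:
# mountpoint -q prints nothing; its exit status says whether the path is a mount point.
if mountpoint -q /mnt/sshfs; then
    echo "mounted"
else
    echo "not mounted"
fi
Note that this only tells you whether something is mounted there, not whether the SSH connection behind it is still alive.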
I'm having an issue with a script used in a project I inherited that has little to no documentation, and am in the process of documenting everything. I'm trying to debug an issue with one line of a script that is executed on the host machine to call out to a LAN-attached Raspberry Pi with SSH to return some information about the Pi.
We already have working versions of this Raspberry Pi which can execute the script without issue, and I'm not sure what the difference is. When executed on the new one, it prompts for the root password on the Pi, but it has not done this on previous versions of the device. I assume it has something to do with the SSH configuration but I don't know enough about SSH to say what would be the cause.
The line in particular causing the issue is:
ssh -o StrictHostKeyChecking=no {host_name} uname -a &>/dev/null
rc=$? #gets the return value of the remote command so we can read the uname info
{host_name} of course is the actual host name it's connecting to, but I've left that part out for privacy reasons. The script is the same on both machines.
Both Pi devices are the same model and I'm having trouble narrowing down what could cause me to not be able to execute this command. Does anyone know what I need to configure in order to be able to execute this command on the Pi remotely?
Quick fix:
sshpass -p 'password' ssh -o StrictHostKeyChecking=no user@server
Detailed fix:
Most likely you will need to set up asymmetric keys (public/private) for a proper passwordless login. Your command does not show that you are using keys, so I'm assuming you are not (e.g. -A or -i /path/to/key). Generally the root user is blocked (I guess that is not your problem); I would set up another user for this or change the sshd config. You could also compare the sshd configurations between the Pi boxes.
See: https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md
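The usual setup is something like this (a sketch; user and host names are placeholders), run on the machine that executes the script:
# Sketch: create a key pair and install the public key on the Pi so ssh stops
# prompting for a password.
ssh-keygen -t ed25519          # accept the defaults; leave the passphrase empty for unattended scripts
ssh-copy-id user@raspberrypi   # appends the public key to ~/.ssh/authorized_keys on the Pi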
Okay, so after some more digging around, I discovered that there was a separate .ssh directory under /root that contained an authorized_keys file. After copying this to the new Pi, it worked. I had been wondering all this time if there was a separate config folder for root, but I've never gone digging around /root, so I wasn't aware that it was there.
I would like to make a shutdown script for my Raspberry Pi that shuts down another Raspberry Pi over SSH.
The script works when I run it by itself, but during the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
Now I know that the shutdown script works outside of the shutdown routine when I call it myself, and the shutdown symlink I made also works at least partially, because I see the changes in the test.txt file every time I shut down.
Can anyone help me how to solve my problem?
Have you tried with single quotes?
The first link in Google has it
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about the sudo, how do you solve entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check this or other links on the web that have useful information.
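The short version of those links: give the remote pi user the right to run shutdown via sudo without a password, for example with a sudoers drop-in like this (a sketch; adjust the user name and the path to shutdown for your system):
# /etc/sudoers.d/shutdown -- edit with: sudo visudo -f /etc/sudoers.d/shutdown
# Lets the 'pi' user run shutdown via sudo without a password prompt.
pi ALL=(ALL) NOPASSWD: /sbin/shutdown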
I would have sent all this in a comment, but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it works now, but I will write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I will note them anyway. In the meantime, because I had not managed to get the script working, I had added a button to shut down the system manually. I also made a script that backs up the MySQL database (which lives on the Raspberry Pi I want to switch off) and copies the backup to the Raspberry Pi that is supposed to switch the other one off automatically via the shutdown script. This is done with scp, using a generated key instead of a password.
I have also changed my script so that it writes a log message:
#!/bin/sh
ssh -t -t pi@10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but has not worked on the days since.
I put together a command to suspend or shut down a remote host over SSH. You may find it useful. It can be used to suspend or shut down a remote computer without an interactive session, and without keeping a terminal busy. You will need to give the remote user permission to shut down or suspend using sudo without a password. Additionally, the local and remote machines should be set up for SSH without an interactive login. The script is more useful for suspending the machine, since a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)
I am trying to figure out why bash autocompletion on the filesystem is slow on my PC. My Linux machine is connected to an AD domain through PAM, and I suspect bash is trying to query a network mount (which is slow since it goes through PAM) every time I use TAB for autocompletion.
I have tried set -x and when I do autocomplete on /var the slowest operation is the following line:
[[ /var == ~* ]]
Also, the following line takes a few seconds to execute in bash when I am connected to the network whereas it returns immediately if it is not connected:
TEMP=~*
I would like to know what bash is trying to expand ~* to or find a workaround.
Try running it with strace.
For example:
strace echo $FOO
If the system is accessing your mount, you will see it in the output.
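For the tilde-expansion case in your question, something along these lines (a sketch) limits the trace to the calls that would touch a network mount or the name-service lookups behind PAM/AD:
# Sketch: trace file access and network connections made while bash expands ~*.
strace -f -e trace=openat,stat,connect bash -c 'TEMP=~*'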
I am a Windows admin and dev, I do not generally work with Linux so forgive me if this is in some way obvious.
I have a not-so-good Linux box running some older version of openSUSE, and I have a script that unmounts the USB thumb drive, formats it, and then waits for the device to become ready again before it runs a script that does a copy and MD5 checksum verification on the source and destination files to ensure the copy was valid. The problem is that on one box the USB thumb drive does not become ready after the format in a consistent way. It takes anywhere from 1 to 2+ minutes before I can access the drive via /media/LABELNAME.
The direct path is /dev/sdb but, of course, I cannot access it directly via this path to copy the files. Here is my shell script as it stands:
#!/bin/bash
set -e
echo "Starting LABELNAME.\n\nUnmounting /dev/sdb/"
umount /dev/sdb
echo "Formatting /dev/sdb/"
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Waiting on remount..."
sleep 30
echo "Format complete. Running make master."
perl /home/labelname_master.20120830.pl
Any suggestions? How might I wait for the drive to become ready and detect it? I have seen Detecting and Writing to a USB Key / Thumb Drive Automatically but quite frankly I don't even know what that answer means.
It seems that you have some automatic mounting service running which detects the flash disk and mounts the partition. However, you already know what the partition is, so I recommend that you simply mount the disk in your script, choosing a suitable mount point yourself.
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Format complete, remounting"
mount /dev/sdb $mountpoint #<-- you would choose $mountpoint
echo "Running make master."
perl /home/labelname_master.20120830.pl
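If you would rather keep relying on the automatic mounter, another option is to poll for the mount point instead of sleeping a fixed 30 seconds. A sketch, with the label and the timeout as placeholders:
# Sketch: wait up to ~2 minutes for the automounter to bring the volume back.
for i in $(seq 1 60); do
    [ -d /media/LABELNAME ] && break
    sleep 2
done
if [ ! -d /media/LABELNAME ]; then
    echo "Drive never came back after formatting" >&2
    exit 1
fi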
I would like to script a sequence of commands involving multiple ssh and scp calls. On a daily basis, I find myself manually performing this task:
From LOCAL system, ssh to SYSTEM1
mkdir /tmp/data on SYSTEM1
from SYSTEM1, ssh to SYSTEM2
mkdir /tmp/data on SYSTEM2
from SYSTEM2, SSH to SYSTEM3
scp files from SYSTEM3:/data to SYSTEM2:/tmp/data
exit to SYSTEM2
scp files from SYSTEM2:/data and SYSTEM2:/tmp/data to SYSTEM1:/tmp/data
rm -fr SYSTEM2:/tmp/data
exit to SYSTEM1
scp files from SYSTEM1:/data and SYSTEM1:/tmp/data to LOCAL:/data
rm -fr SYSTEM1:/tmp/data
I do this process at LEAST once a day and it takes approximately 5-10 minutes going between the different systems and then cleaning up afterwards. I would really like to automate this in a bash script, but my amateur attempts so far have been unsuccessful. As you might suspect, communication between the systems is constrained: LOCAL can only see SYSTEM1, SYSTEM2 can only see SYSTEM1 and SYSTEM3, SYSTEM3 can only see SYSTEM2, and so on. You get the idea. What is the best way to do this? Additionally, SYSTEM1 is a hub for many other systems, so SYSTEM2 must be indicated by the user (SYSTEM3 will always have the same relative IP/hostname compared to any SYSTEM2).
I tried just putting the commands in the proper order in a shell script and then manually typing in the passwords when prompted (which would already be a huge gain in efficiency) but either the method doesn't work or my execution of the script is wrong. Additionally, I would want to have a command line argument for the script that would take a pattern for which 'system2' to connect to, a pattern for the data to copy, and a target location for the data on the local system.
Such as
./grab_data system2 *05-14* ~/grabbed-data
I did some searching and I think my next step would be to have scripts on each system that perform the local tasks, and then execute those scripts via ssh commands from the respective remote system. Is there a better way? What commands should I look at using, and what would be the general approach to automating this sort of nested ssh and scp problem?
I realize my description may be a bit convoluted so please ask for clarification on any area that I did not properly describe.
Thanks.
You can simplify this process a lot by tunneling ssh connections over other ssh connections (see this previous answer). The way I'd do it is to create an .ssh/config file on the LOCAL system with the following entries:
Host SYSTEM3
    ProxyCommand ssh -e none SYSTEM2 exec /usr/bin/nc %h %p 2>/dev/null
    HostName SYSTEM3.full.domain
    User system3user

Host SYSTEM2
    ProxyCommand ssh -e none SYSTEM1 exec /usr/bin/nc %h %p 2>/dev/null
    HostName SYSTEM2.full.domain
    User system2user

Host SYSTEM1
    HostName SYSTEM1.full.domain
    User system1user
(That's assuming both intermediate hosts have netcat installed as /usr/bin/nc -- if not, you may have to find or install some equivalent way of gatewaying stdin and stdout into a TCP session.)
With this set up, you can use scp SYSTEM3:/data /data on LOCAL, and it'll automatically tunnel through SYSTEM1 and SYSTEM2 (and ask for the passwords for the three SYSTEMn's in order -- this can be a little confusing, especially if you mistype one).
If you're connecting to multiple systems, and especially if you have to forward connections through intermediate hosts, you will want to use public key authentication with ssh-agent forwarding enabled. That way, you only have to authenticate once.
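A minimal sketch of that setup (the key path is an assumption):
# Sketch: load your key into an agent once, then forward the agent through each hop.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519    # assumed key location
ssh -A SYSTEM1               # -A forwards the agent so SYSTEM1 can authenticate onward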
Scripted SSH with agent forwarding may suffice if all you need to do is check the exit status from your remote commands, but if you're going to do anything complex you might be better off using expect or expect-lite to drive the SSH/SCP sessions in a more flexible way. Expect in particular is designed to be a robust replacement for interactive sessions.
If you stick with shell scripting, and your filenames change a lot, you can always create a wrapper around SSH or SCP like so:
# usage: my_ssh [remote_host] [command_line]
# returns: exit status of remote command, or 255 on SSH error
my_ssh () {
    local host="$1"
    shift
    ssh -A "$host" "$@"
}
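Hypothetical usage, matching one of the steps from your list:
# Hypothetical usage: the wrapper's exit status is the remote command's exit status.
if my_ssh SYSTEM1 'mkdir -p /tmp/data'; then
    echo "created /tmp/data on SYSTEM1"
fi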
Between ssh-agent and the wrapper function, you should have a reasonable starting point for your own efforts.
Another way could be to use rsync, which automatically creates any needed directories and, if you want, removes the copied source files.
In your case, you could work with the commands
home:~$ ssh system1
system1:~$ ssh system2
system2:~$ rsync -aPSHiv system3:/data /tmp/data
system2:~$ exit
system1:~$ rsync -aPSHiv --remove-source-files system2:/tmp/data /tmp/data
system1:~$ rsync -aPSHiv system2:/data /tmp/data
system1:~$ exit
home:~$ rsync -aPSHiv --remove-source-files system1:/tmp/data /tmp/data
home:~$ rsync -aPSHiv system1:/data /data
If you combine this with Gordon's approach, you can even reduce that to
home:~$ rsync -aPSHiv system1:/data/ system2:/data/ system3:/data/ /data/
Note that rsync distinguishes between ...data and ...data/ - the former means the directory and its contents, the latter just the contents. If you mix them up, you might end up with a directory named data inside another directory named data.
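For instance (hypothetical local paths):
rsync -a /src/data  /backup/   # copies the directory itself -> /backup/data/...
rsync -a /src/data/ /backup/   # copies only its contents    -> /backup/...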
Besides, you simplify things if you work with public SSH keys instead of passwords.