Bash script to process two files and loop through to check mount points - Linux

I have two files: one contains host names and the other contains Linux mount-point information, which I'm processing from the file mount.txt.
What I'm really looking for is to log in to each host and check whether the mount points mentioned in the /tmp/mounts file exist on that host; if a mount point exists, just do ls -ld on it, otherwise skip it.
Being a novice, I'm not able to work out how to process the mount-point check.
#!/bin/bash
REMOTE_HOSTS="/tmp/hosts"
REMOTE_MOUNTS="/tmp/mounts"
awk -F":" '{print $1}' mount.txt | sort -u > $REMOTE_HOSTS
awk '{print $3}' mount.txt | sort -u > $REMOTE_MOUNTS
for hosts in $(cat $REMOTE_HOSTS);
do
echo "------ $hosts ----"
ssh -o StrictHostKeyChecking=no -i /home/data/.ans root@$hosts
done
Side note: /home/data/.ans is my RSA key for root login.
Hostname File:
/tmp/hosts
my-hosts01
my-hosts02
Mount-point File:
/tmp/mounts
/data/oracle01
/data/oracle02
/data/oracle03
Please advise and help me with how I could do that; sorry if I could not make it more readable.

You have to distinguish between a mount point, which is simply a directory, and a mounted element, which can be a storage device or something else.
Knowing that:
if you want to check the mount point's existence, you simply have to check the directory: Check if a directory exists in a shell script
if you want to check whether an element is mounted on the mount point: Check if directory mounted with bash
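For the loop itself, here is a minimal sketch of the first approach (checking that the directory exists), assuming the /tmp/hosts and /tmp/mounts files your script already generates and passwordless root SSH with the key you mentioned:
#!/bin/bash
REMOTE_HOSTS="/tmp/hosts"
REMOTE_MOUNTS="/tmp/mounts"
while read -r host; do
    echo "------ $host ----"
    while read -r mp; do
        # -n keeps ssh from swallowing the rest of the lists on stdin;
        # the test and the ls both run on the remote host, and a missing directory is simply skipped
        ssh -n -o StrictHostKeyChecking=no -i /home/data/.ans "root@$host" "[ -d '$mp' ] && ls -ld '$mp'"
    done < "$REMOTE_MOUNTS"
done < "$REMOTE_HOSTS"
The second approach would replace the [ -d ... ] test with mountpoint -q, as described in the linked question.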

Related

Get file count at remote location during FTP in shell script on linux server

Requirement: I need to get a file count based on a wildcard entry present at a remote location (a Linux server) and store it in a variable for validation purposes.
I tried the code below:
export ExpectedFileCount=$(ftp -inv $FTPSERVER >> $FTPLOGFILE <<END_SCRIPT
user $FTP_USER $FTP_PASSWORD
passive
cd $PATH
ls -ltr ${WILDCARD}*xml| wc -l | sed 's/ *//g'
quit
END_SCRIPT)
But the code is storing the code snippet in the variable and executing the commands every time I call the variable.
Please suggest changes to the script so that it is executed once and the value is stored in the variable.
This seems to work (on Ubuntu, no promises about portability):
export ExpectedFileCount=`ftp -in $FTPSERVER << END_SCRIPT | tee -a $FTPLOGFILE | egrep -c '\.xml$'
user $FTP_USER $FTP_PASSWORD
passive
cd $REMOTE_PATH
ls -l
quit
END_SCRIPT`
Issues:
$REMOTE_PATH used in place of $PATH for remote directory (as $PATH has a special meaning)
only a simple ls -l is performed inside the ftp session, and its output is parsed locally, since ftp does not support arbitrary shell commands
I can't see how to capture the output of a command with a heredoc using $(...), but it seems to work with backticks if the closing backtick is after the final delimiter
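For what it's worth, in bash the $(...) form can also capture heredoc-fed output if the closing parenthesis is placed on its own line after the heredoc terminator; a sketch using the same variables as above:
ExpectedFileCount=$(ftp -in "$FTPSERVER" <<END_SCRIPT | tee -a "$FTPLOGFILE" | grep -c '\.xml$'
user $FTP_USER $FTP_PASSWORD
passive
cd $REMOTE_PATH
ls -l
quit
END_SCRIPT
)
export ExpectedFileCount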

Detect usb device name automatically on connection

When I try to copy a directory from my Linux home directory to a USB drive (pen drive), the following command works fine: cp -r /home/directoryname /media/usbname(pendrivename).
But I am looking for a command that copies the directory without giving the "usbname(pendrivename)".
Not sure I have understood your request, but a script like this should do the trick if your USB drive is always mounted with the same tag:
#!/bin/bash
cp -r "$1" "/media/usbname(pendrivename)"
If you save the script as ~/cpusb.sh, you can do:
chmod +x ~/cpusb.sh
echo "alias cpusb='~/cpusb.sh'" >> ~/.bash_aliases
source ~/.bash_aliases
and then use cpusb whenever you want.
I think I'd use mount to textually infer the connected USB's directory, using sed or awk.
Maybe even save mount's result when nothing is connected and 'subtract' it from mount's result after connecting a new USB device.
Or even better, run your script before you connect the device:
- the script will run mount every second and will wait for a change in the result.
- when a change is detected, the newly added device is your usb.
Something like:
#!/bin/bash
mount_old="$(mount)"
mount_new="${mount_old}"
while [[ "${mount_new}" == "${mount_old}" ]]; do
sleep 1
mount_new="$(mount)"
done
# getting added line using sort & uniq
sort <(echo "${mount_old}") <(echo "${mount_new}") | uniq -u | awk '{ print $3 }'
# another way to achieve this using diff & grep
# diff <(echo "${mount_old}") <(echo "${mount_new}") | grep ">" | awk '{ print $4 }'
It's merely a sketch, you might need/want to refine it.
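If your util-linux is recent enough, findmnt can do the waiting for you; a sketch, assuming --poll support is available on your system:
#!/bin/bash
# Block until the next mount event and print the newly mounted target directory.
# --poll=mount waits for mount actions only; --first-only exits after the first match.
findmnt --poll=mount --first-only --noheadings -o TARGET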

Shell script to compare remote directories

I have a shell script that I am using to compare directory contents. The script has to ssh to different servers to get a directory listing. When I run the script below, I get the /tmp directory listing of the server I am logged into, not that of the servers I am trying to ssh to. Could you please tell me what I am doing wrong?
The config file used in the script is as follows (called config.txt):
server1,server2,/tmp
The script is as follows
#!/bin/sh
CONFIGFILE="config.txt"
IFS=","
while read a b c
do
SERVER1=$a
SERVER2=$b
COMPDIR=$c
`ssh user@$SERVER1 'ls -l $COMPDIR'`| sed -n '1!p' >> server1.txt
`ssh user@$SERVER2 'ls -l $COMPDIR'`| sed -n '1!p' >> server2.txt
done < $CONFIGFILE
When I look at the outputs of server1.txt and server2.txt, they are both exactly the same: they contain the /tmp listing of the server the script is running on (not server1 or server2). Doing the ssh + directory listing on the command line works just fine. I am also getting the error "Pseudo-terminal will not be allocated because stdin is not a terminal". Adding -t -t to the ssh command isn't helping either.
Thank you
I have the back ticks in order to execute the command.
Backticks are not needed to execute a command - they are used to expand the standard output of the command into the command line. Certainly you don't want the output of your ssh commands to be interpreted as commands. Thus, it should work fine without the backticks:
ssh user@$SERVER1 "ls -l $COMPDIR" | sed -n '1!p' >>server1.txt
ssh user@$SERVER2 "ls -l $COMPDIR" | sed -n '1!p' >>server2.txt
(provided that double quotes to allow expansion of $COMPDIR are used).
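Putting it together, a minimal sketch of the corrected loop (same config.txt format as in the question; -n keeps ssh from consuming the remaining config lines, and a final diff is one way to compare the two listings):
#!/bin/sh
CONFIGFILE="config.txt"
while IFS="," read SERVER1 SERVER2 COMPDIR
do
    # -n: do not read from stdin, otherwise ssh eats the rest of the config file
    ssh -n user@$SERVER1 "ls -l $COMPDIR" | sed -n '1!p' >> server1.txt
    ssh -n user@$SERVER2 "ls -l $COMPDIR" | sed -n '1!p' >> server2.txt
done < $CONFIGFILE
diff server1.txt server2.txt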
First you need to generate keys so you can log in to the remote host without a password:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
Then try to ssh without a password:
ssh remote-host
Then invoke it in your script, but first do a sanity check:
ssh remote-host true || { echo "Cannot connect to remote host" >&2; exit 1; }

Check if directory mounted with bash

I am using
mount -o bind /some/directory/here /foo/bar
I want to check /foo/bar with a bash script and see whether it has been mounted. If not, call the above mount command; otherwise, do something else. How can I do this?
CentOS is the operating system.
You didn't bother to mention an O/S.
Ubuntu Linux 11.10 (and probably most up-to-date flavors of Linux) have the mountpoint command.
Here's an example on one of my servers:
$ mountpoint /oracle
/oracle is a mountpoint
$ mountpoint /bin
/bin is not a mountpoint
Actually, in your case, you should be able to use the -q option, like this:
mountpoint -q /foo/bar || mount -o bind /some/directory/here /foo/bar
Running the mount command without arguments will tell you the current mounts. From a shell script, you can check for the mount point with grep and an if-statement:
if mount | grep /mnt/md0 > /dev/null; then
echo "yay"
else
echo "nay"
fi
In my example, the if-statement is checking the exit code of grep, which indicates if there was a match. Since I don't want the output to be displayed when there is a match, I'm redirecting it to /dev/null.
The manual of mountpoint says that it:
checks whether the given directory or file is mentioned in the /proc/self/mountinfo file.
The manual of mount says that:
The listing mode is maintained for backward compatibility only. For
more robust and customizable output use findmnt(8), especially in your
scripts.
So the correct command to use is findmnt, which is itself part of the util-linux package and, according to the manual:
is able to search in /etc/fstab, /etc/mtab or /proc/self/mountinfo
So it actually searches more things than mountpoint. It also provides the convenient option:
-M, --mountpoint path
Explicitly define the mountpoint file or directory. See also --target.
In summary, to check whether a directory is mounted with bash, you can use:
if [[ $(findmnt -M "$FOLDER") ]]; then
echo "Mounted"
else
echo "Not mounted"
fi
Example:
mkdir -p /tmp/foo/{a,b}
cd /tmp/foo
sudo mount -o bind a b
touch a/file
ls b/ # should show file
rm -f b/file
ls a/ # should show nothing
[[ $(findmnt -M b) ]] && echo "Mounted"
sudo umount b
[[ $(findmnt -M b) ]] || echo "Unmounted"
My solution:
is_mount() {
path=$(readlink -f $1)
grep -q "$path" /proc/mounts
}
Example:
is_mount /path/to/var/run/mydir/ || mount --bind /var/run/mydir/ /path/to/var/run/mydir/
Regarding Mark J. Bobak's answer: mountpoint does not work if you mount with the bind option in a different filesystem.
Regarding Christopher Neylan's answer: there is no need to redirect grep's output to /dev/null; just use grep -q instead.
Most importantly, canonicalize the path using readlink -f $mypath:
If you check a path such as /path/to/dir/ that ends with a trailing slash, the path in /proc/mounts or in the mount output is /path/to/dir
In most Linux releases, /var/run/ is a symlink to /run/, so if you bind-mount /var/run/mypath and check whether it is mounted, it will show up as /run/mypath in /proc/mounts.
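For example, readlink -f normalizes both of the cases mentioned above (shown here with the same illustrative paths, assuming they exist):
readlink -f /path/to/dir/      # -> /path/to/dir   (trailing slash removed)
readlink -f /var/run/mypath    # -> /run/mypath    (symlinked parent resolved)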
I like the answers that use /proc/mounts, but I don't like doing a simple grep. That can give you false positives. What you really want to know is "do any of the rows have this exact string for field number 2". So, ask that question. (in this case I'm checking /opt)
awk -v status=1 '$2 == "/opt" {status=0} END {exit status}' /proc/mounts
# and you can use it in and if like so:
if awk -v status=1 '$2 == "/opt" {status=0} END {exit status}' /proc/mounts; then
echo "yes"
else
echo "no"
fi
The answers here are too complicated; just check whether the mount exists using:
cat /proc/mounts | tail -n 1
This only outputs the last mounted folder; if you want to see all of them, just remove the tail command.
Another clean solution is like this:
$ mount | grep /dev/sdb1 > /dev/null && echo mounted || echo unmounted
Of course, 'echo something' can be replaced by whatever you need to do in each case.
In my .bashrc, I made the following alias:
alias disk-list="sudo fdisk -l"

Linux script for copying files from multiple windows machines

I'm having an issue trying to make a bash script that reads IP addresses and usernames from a file, mounts a connection to each Windows share, and then copies .ano files into a new folder named after the user.
At the moment it doesn't quite work: it makes hundreds of folders called *.ano if it cannot find the Windows share.
Help please
Text file:
192.168.0.2 user1
192.168.0.3 user2
bash script:
USER='/home/user/user.ip'
IPADDY=$(grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $USER)
USERNAME=$(awk '{ print $NF }' $USER)
for i in $IPADDY $USERNAME
do
mkdir /home/user/Documents/$USERNAME
mount -t smbfs //$IPADDY/$USERNAME /home/user/$USERNAME
rsync -va /home/user/$USERNAME/*.ano /home/user/Documents/$USERNAME/*.ano
done
Hi all, thanks for such a quick reply. I have changed the code as follows but still get multiple files. Have I done something wrong here?
USER='/home/user/user.ip'
IPADDY=$(grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $USER)
USERNAME=$(awk '{ print $NF }' $USER)
while read IPADDY USERNAME; do
mkdir /home/user/Documents/$USERNAME
mount -t smbfs //$IPADDY/$USERNAME /home/user/$USERNAME
rsync -va /home/user/$USERNAME/*.ano /home/user/Documents/$USERNAME/
done < $USER
The problem is in the for command. In your script, i iterates over the contents of $IPADDY, then it iterates over the contents of $USERNAME. Meanwhile, $USERNAME inside the loop gets expanded to user1 user2, resulting in:
mkdir /home/user/Documents/user1 user2
The mount line becomes:
mount -t smbfs //192.168.0.2 192.168.0.3/user1 user2 /home/user/user1 user2
And so on.
Rather, loop over the file itself:
while read IPADDY USERNAME; do
#awesome things here based on $IPADDY and $USERNAME
done < $USER
You might want to add [[ -z $IPADDY ]] && continue to skip over any possible blank lines in the file.
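A minimal sketch of the corrected script built around that loop (same paths and smbfs mount as in the question; the input-file variable is renamed to USERFILE because USER is normally set by the shell, and the mount's exit status is checked so a missing share is skipped instead of producing stray *.ano folders):
#!/bin/bash
USERFILE='/home/user/user.ip'
while read IPADDY USERNAME; do
    # skip any blank lines in the file
    [[ -z $IPADDY ]] && continue
    mkdir -p "/home/user/Documents/$USERNAME" "/home/user/$USERNAME"
    # only copy if the share actually mounted, then unmount again
    if mount -t smbfs "//$IPADDY/$USERNAME" "/home/user/$USERNAME"; then
        rsync -va "/home/user/$USERNAME/"*.ano "/home/user/Documents/$USERNAME/"
        umount "/home/user/$USERNAME"
    else
        echo "could not mount //$IPADDY/$USERNAME, skipping" >&2
    fi
done < "$USERFILE"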
One problem is that you use a wildcard (*) for the destination files. But those files don't exist - therefore /home/user/Documents/$USERNAME/*.ano cannot match, and rsync will create a folder called *.ano.
Better do:
rsync -va /home/user/$USERNAME/*.ano /home/user/Documents/$USERNAME/
