script to copy, install and execute on multiple hosts - linux

I am trying to copy a few files to multiple hosts and install/configure them on each host by running specific commands depending on the OS type. The IP addresses of the hosts are read from the hosts.txt file.
It appears that when I run the script, the commands do not execute on the remote hosts. Can someone help identify the issues with this script? Sorry for the basic question, I am quite new to shell scripting.
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
echo "####Installing hqagent####"
while read host; do
  scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root#$host:/opt
  if ssh -n root#$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
  then
    cd /opt
    tar -xvzf $AGENT
    mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
    mkdir /opt/hqagent/
    ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
    useradd hqagent
    groupadd hqagent
    chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1/
    cd /etc/init.d
    chmod 755 hqagent.sh
    chkconfig --add hqagent.sh
    su - hqagent
    /opt/agent-5.8.1/bin/hq-agent.sh start
  else
    cd /opt
    tar -xvzf $AGENT
    mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
    rm -rf /opt/hqagent.sh
    mkdir /opt/hqagent/
    ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
    useradd hqagent
    groupadd hqagent
    chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1
    cd /etc/init.d
    ln -s /opt/hqagent/agent-current/bin/hq-agent.sh hqagent.sh
    cd /etc/init.d/rc3.d/
    ln -s /etc/init.d/hqagent.sh S99hqagent
    ln -s /etc/init.d/hqagent.sh K01hqagent
    cd ../rc5.d
    ln -s /etc/init.d/hqagent.sh S99hqagent
    ln -s /etc/init.d/hqagent.sh K01hqagent
    chkconfig --add hqagent.sh
    su - hqagent
    /opt/agent-5.8.1/bin/hq-agent.sh start
  fi
done < hosts.txt
error:
tar (child): agent-x86-64-linux-5.8.1.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
mv: cannot stat `/opt/agent.properties.nondmz': No such file or directory
mkdir: cannot create directory `/opt/hqagent/': File exists
ln: creating symbolic link `/opt/hqagent/agent-current': File exists
useradd: user 'hqagent' already exists
groupadd: group 'hqagent' already exists
chown: cannot access `/opt/agent-5.8.1/': No such file or directory
chmod: cannot access `hqagent.sh': No such file or directory
error reading information on service hqagent.sh: No such file or directory
-bash: line 1: 10.145.34.6: command not found
-bash: line 2: 10.145.6.10: command not found
./hq-install.sh: line 29: /opt/agent-5.8.1/bin/hq-agent.sh: No such file or directory

It appears that the problem is that you run this script on the "master" server, but expect the branches of your if-statement to be run on the remote hosts. You need to factor those branches out into their own files, copy them to the remote hosts along with the other files, and in your if-statement each branch should just be an ssh command to the remote host, triggering the script you copied over.
So your master script would look something like:
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
# Scripts containing the stuff you want done on the remote hosts
centos_setup=centos_setup.sh
other_setup=other_setup.sh
echo "####Installing hqagent####"
while read host; do
  echo " ++ Copying files to $host"
  scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
  echo -n " ++ Running remote part on $host "
  if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
  then
    echo "(centos)"
    scp $centos_setup root@$host:/opt
    ssh root@$host "/opt/$centos_setup"
  else
    echo "(generic)"
    scp $other_setup root@$host:/opt
    ssh root@$host "/opt/$other_setup"
  fi
done < hosts.txt
The contents of the two auxiliary scripts would be the current contents of the if-branches in your original.
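For example, centos_setup.sh could be essentially the CentOS branch of your original, now running locally on the remote host. The only change in the sketch below is the last line, where su -c is used so the start command actually runs as hqagent (a bare su - hqagent just opens an interactive shell and holds up the script). Like your original, it assumes the hqagent.sh init script is already in /etc/init.d:
#!/bin/bash
# centos_setup.sh - the CentOS branch of the original, run on the remote host itself
cd /opt
tar -xvzf agent-x86-64-linux-5.8.1.tar.gz
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1/
cd /etc/init.d
chmod 755 hqagent.sh
chkconfig --add hqagent.sh
# a bare 'su - hqagent' would open an interactive shell and the next line
# would only run after it exits; -c runs the command as hqagent directly
su - hqagent -c '/opt/agent-5.8.1/bin/hq-agent.sh start'
other_setup.sh would be built the same way from the else branch.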

Related

Limited a user by creating rbash and exporting the path in .bashrc, but /bin/ls still works

I tried limiting the ls command for a specific user. It works, but when I execute /bin/ls with the full path, it still runs successfully. How can I restrict that as well?
useradd -m $username -s /bin/rbash
echo "$username:$password" | chpasswd
mkdir /home/$username/bin
chmod 755 /home/$username/bin
echo "PATH=$HOME/bin" >> /home/$username/.bashrc
echo "export PATH" >> /home/$username/.bashrc
ln -s /bin/ls /home/$username/bin/

Unable to run shell script in crontab

I am unable to make a script execute successfully from crontab.
When the script is executed manually, it works fine. When added to the crontab it gives errors.
When the script is executed manually as follows it all works fine:
cd /home/admin/git/Repo
./lunchpad2.sh
The script is added to crontab as follows:
sudo crontab -e
30 13 * * * /home/admin/git/Repo/lunchpad2.sh > /home/admin/git/Repo/outcome.err
lunchpad2.sh has 744 permissions set.
The script itself:
#!/bin/bash -p
PATH=$PATH:/home/admin/git/Repo
echo "--> Starting!"
echo "--> Stopping docker"
docker-compose down
echo "--> Switching files"
mv dc_conf_standby.py dc_conf_aboutready.py
mv dc_conf.py dc_conf_standby.py
mv dc_conf_aboutready.py dc_conf.py
echo "--> Building docker"
docker-compose up -d --build
echo "--> Completed!"
The errors that are generated:
/home/admin/git/Repo/lunchpad2.sh: line 7: docker-compose: command not found
mv: cannot stat ‘dc_conf_standby.py’: No such file or directory
mv: cannot stat ‘dc_conf.py’: No such file or directory
mv: cannot stat ‘dc_conf_aboutready.py’: No such file or directory
/home/admin/git/Repo/lunchpad2.sh: line 15: docker-compose: command not found
I see two issues here:
1. You need to cd either in the script or in the cron job. Cron runs the command in your home directory; you can add echo "$PWD" to the script to confirm.
2. You need to give the full path to the docker-compose executable (run "which docker-compose" to find it).
With both fixes applied, the script becomes:
#!/bin/bash -p
cd /home/admin/git/Repo
echo "--> Starting!"
echo "--> Stopping docker"
/usr/bin/docker-compose down
echo "--> Switching files"
mv dc_conf_standby.py dc_conf_aboutready.py
mv dc_conf.py dc_conf_standby.py
mv dc_conf_aboutready.py dc_conf.py
echo "--> Building docker"
/usr/bin/docker-compose up -d --build
echo "--> Completed!"

Generating ssh-key file for multiple users on each server

I have to create 60 ssh users on one of the servers. I created the users using a small user-creation script which loops through each user in the user list.
I'm trying to run a similar script which will generate ssh keys for each user.
#!/bin/sh
for u in `cat sshusers.txt`
do
  echo $u
  sudo su - $u
  mkdir .ssh; chmod 700 .ssh; cd .ssh; ssh-keygen -f id_rsa -t rsa -N '';
  chmod 600 /home/$u/.ssh/*;
  cp id_rsa.pub authorized_keys
done
When I run this script, it basically logs into all 60 user accounts but does not create the .ssh dir or generate the passwordless ssh key. Any ideas to resolve this would be greatly appreciated!
Thanks
sudo su - $u starts a new shell; the commands that follow aren't run until that shell exits. Instead, you need to run the commands with a single shell started by sudo.
while IFS= read -r u; do
  sudo -u "$u" sh -c "
    cd '/home/$u'   # run everything in the user's home directory
    mkdir .ssh
    chmod 700 .ssh
    cd .ssh
    ssh-keygen -f id_rsa -t rsa -N ''
    chmod 600 '/home/$u/.ssh/'*
    cp id_rsa.pub authorized_keys
  "
done < sshusers.txt
After trying several times, I made a few little changes that seem to work now.
#!/bin/bash
for u in `more sshuser.txt`
do
echo $u
sudo su - "$u" sh -c "
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -f id_rsa -t rsa -N ''
chmod 600 '/home/$u/.ssh/'*
cp id_rsa.pub authorized_keys "
done

How to execute multiple commands on machineB from machineA efficiently?

I have a tar file (abc.tar.gz) on machineA which I need to copy to machineB and untar there. On machineB, I have an application running which is part of that tar file, so I need to write a shell script which, when run on machineA, will do the following things on machineB -
execute command sudo stop test_server on machineB
delete everything inside /opt/data/ folder leaving that tar file.
copy that tar file in machineB inside this location /opt/data/
now untar that tar file in /opt/data/ folder.
now execute this command sudo chown -R posture /opt/data
after that execute this command to start the server sudo start test_server
On machineA, I will have my shell script and the tar file in the same location. As of now I have got the shell script below by reading tutorials, as I recently started working with shell scripts.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB) # I will have more machines here in future.
readonly TAR_FILE_NAME=abc.tar.gz
ssh david#${MACHINE_LOCATION[0]} 'sudo stop test_server'
ssh david#${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'
sudo scp TAR_FILE_NAME david#${MACHINE_LOCATION[0]}:$SOFTWARE_INSTALL_LOCATION/
ssh david#${MACHINE_LOCATION[0]} 'tar -xvzf /opt/data/abc.tar.gz'
ssh david#${MACHINE_LOCATION[0]} 'sudo chown -R posture /opt/data'
ssh david#${MACHINE_LOCATION[0]} 'sudo start test_server'
Did I get everything right, or can anything be improved in the above shell script? I need to run it on our production machine.
Is it possible to show a status message for each step, saying the step executed successfully before moving on to the next one, and otherwise to terminate the shell script with a proper error message?
You can do it all in a single ssh round-trip, and the tarball doesn't need to be stored at the remote location at all.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB)
readonly TAR_FILE_NAME=abc.tar.gz
ssh david@${MACHINE_LOCATION[0]} "
  set -e
  sudo stop test_server
  sudo rm -rf '$SOFTWARE_INSTALL_LOCATION'/*
  tar -C '$SOFTWARE_INSTALL_LOCATION' -x -v -z -f -
  sudo chown -R posture '$SOFTWARE_INSTALL_LOCATION'
  sudo start test_server" <"$TAR_FILE_NAME"
This also fixes two bugs in your version: the tarball would have been extracted into your home directory (not /opt/data), and the dollar sign was missing from the interpolation of the variable TAR_FILE_NAME.
I put the remote script in double quotes so that you can use $SOFTWARE_INSTALL_LOCATION inside it; notice how the interpolated value is wrapped in single quotes for the remote shell (this won't work if the value itself contains single quotes, of course).
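For example, with SOFTWARE_INSTALL_LOCATION=/opt/data, the local shell expands the variable before anything is sent, so the line the remote shell actually receives is:
sudo rm -rf '/opt/data'/*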
You could perhaps avoid the chown if you could run the tar command as user posture directly.
Adding echo prompts to show progress is a trivial exercise.
Things will be a lot smoother if you have everything -- both ssh and sudo -- set up for passwordless access, but I believe this should work even if you do get a password prompt or two.
(If you do want to store the tarball remotely, just add a tee before the tar.)
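To illustrate that last point, only the tar line inside the quoted remote script would change; something like this (both variables are expanded by the local shell before the command is sent, as above):
tee '$SOFTWARE_INSTALL_LOCATION/$TAR_FILE_NAME' | tar -C '$SOFTWARE_INSTALL_LOCATION' -x -v -z -f -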
You can generate a status message for each step using the exit status of each ssh command, as I have done for the first two ssh commands below.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB) # I will have more machines here in future.
readonly TAR_FILE_NAME=abc.tar.gz
# An error exit function
function error_exit
{
echo "$1" 1>&2
exit 1
}
if ssh david@${MACHINE_LOCATION[0]} 'sudo stop test_server'; then
  echo "`date -u` INFO : Stopping test_server" 1>&2
else
  error_exit "`date -u` ERROR : Unable to stop test_server! Aborting."
fi
if ssh david@${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'; then
  echo "`date -u` INFO : Removing /opt/data" 1>&2
else
  error_exit "`date -u` ERROR : Unable to remove /opt/data! Aborting."
fi
sudo scp "$TAR_FILE_NAME" david@${MACHINE_LOCATION[0]}:$SOFTWARE_INSTALL_LOCATION/
ssh david@${MACHINE_LOCATION[0]} 'tar -C /opt/data -xvzf /opt/data/abc.tar.gz'
ssh david@${MACHINE_LOCATION[0]} 'sudo chown -R posture /opt/data'
ssh david@${MACHINE_LOCATION[0]} 'sudo start test_server'
For password management, you can set up passwordless ssh authentication:
http://www.linuxproblem.org/art_9.html
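The usual recipe is roughly this (a sketch, assuming no key exists yet and reusing david and machineB from above):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id david@machineB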

rsync bash script duplicating dir structure?

I have a script attaching an Amazon S3 bucket as a mount point on my CentOS 6.5 machine.
I am attempting to use rsync to copy files from two locations to the bucket.
Code:
#!/bin/bash
# SET THE BUCKET NAME HERE
S3_BUCKET="my-bucketname";
# SET THE MOUNT POINT HERE
MNT_POINT="/mnt/my-mountpoint";
# Create the mount point if it does not already exist, and set permissions on it
if [[ ! -e $MNT_POINT ]]; then
  mkdir $MNT_POINT;
  chmod -R 0777 $MNT_POINT;
fi;
# Mount the bucket
riofs -c ~/.config/riofs/riofs.conf.xml -o rw,allow_other,umask=2777,uid=1000,gid=1000 --cache-dir=/tmp/cache $S3_BUCKET $MNT_POINT;
mkdir $MNT_POINT/home;
mkdir $MNT_POINT/mysqlbak;
# Copy all "User" directories, except those owned by root
for filename in /home/* ; do
  # Get the owner of $filename.
  ACCT=$(stat -c '%U' "$filename");
  # If the file is a directory NOT owned by root, run backup.
  if [ -d "$filename" -a "$ACCT" != "root" ]; then
    # Rsync to the mount
    rsync -a /home/$filename $MNT_POINT/home;
  fi;
done;
# Copy all mysql backups
for mysqlbak in /mysqlbak/* ; do
  # Rsync to the mount
  rsync -a /mysqlbak/$mysqlbak $MNT_POINT/mysqlbak;
done;
# No need to keep it mounted
umount $MNT_POINT;
As you can see, I am attempting to keep a backup of the /mysqlbak folder's contents and of /home's contents (minus anything owned by the root account).
The issue is, when I run this script on my server I am getting the following errors:
rsync: change_dir "/home//home" failed: No such file or directory (2)
rsync error: some files/attrs were not transfered (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
and
rsync: change_dir "/mysqlbak//mysqlbak" failed: No such file or directory (2)
rsync error: some files/attrs were not transfered (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
I can assure you that /home and /mysqlbak both exist.
How can I fix this so it properly syncs up to this mount?
/home and /mysqlbak are not created in the bucket
The loop variables already contain the full paths: the globs /home/* and /mysqlbak/* expand to /home/username and /mysqlbak/filename, so prefixing the directory again produces paths like /home//home/username, which is exactly what the error messages show. Replace
rsync -a /home/$filename $MNT_POINT/home;
by
rsync -a $filename $MNT_POINT/home;
and replace
rsync -a /mysqlbak/$mysqlbak $MNT_POINT/mysqlbak;
by
rsync -a $mysqlbak $MNT_POINT/mysqlbak;
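Applied to the first loop, that gives something like the following sketch (the quotes around $filename are my addition, to cope with directory names containing spaces):
for filename in /home/* ; do
  ACCT=$(stat -c '%U' "$filename");
  if [ -d "$filename" -a "$ACCT" != "root" ]; then
    rsync -a "$filename" $MNT_POINT/home;
  fi;
done;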
