How to execute multiple commands on machineB from machineA efficiently?

I have a tar file (abc.tar.gz) on machineA which I need to copy to machineB and untar there. On machineB, an application that is part of that tar file is running, so I need to write a shell script, run from machineA, that will do the following things on machineB:
execute the command sudo stop test_server on machineB
delete everything inside the /opt/data/ folder (apart from that tar file)
copy the tar file to machineB into /opt/data/
untar the tar file in the /opt/data/ folder
execute the command sudo chown -R posture /opt/data
start the server with sudo start test_server
On machineA, I will have my shell script and the tar file in the same location. So far I have come up with the script below from reading tutorials, as I only recently started working with shell scripts.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB) # I will have more machines here in future.
readonly TAR_FILE_NAME=abc.tar.gz
ssh david@${MACHINE_LOCATION[0]} 'sudo stop test_server'
ssh david@${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'
sudo scp TAR_FILE_NAME david@${MACHINE_LOCATION[0]}:$SOFTWARE_INSTALL_LOCATION/
ssh david@${MACHINE_LOCATION[0]} 'tar -xvzf /opt/data/abc.tar.gz'
ssh david@${MACHINE_LOCATION[0]} 'sudo chown -R posture /opt/data'
ssh david@${MACHINE_LOCATION[0]} 'sudo start test_server'
Did I get everything right, or can anything be improved in the above shell script? I need to run it on our production machines.
Is it possible to show a status message for each step, saying that the step executed successfully before moving on to the next one, and otherwise terminating the script with a proper error message?

You can do it all in a single ssh round-trip, and the tarball doesn't need to be stored at the remote location at all.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB)
readonly TAR_FILE_NAME=abc.tar.gz
ssh david@${MACHINE_LOCATION[0]} "
set -e
sudo stop test_server
sudo rm -rf '$SOFTWARE_INSTALL_LOCATION'/*
tar -C '$SOFTWARE_INSTALL_LOCATION' -x -v -z -f -
sudo chown -R posture '$SOFTWARE_INSTALL_LOCATION'
sudo start test_server" <"$TAR_FILE_NAME"
This also fixes two bugs: the tarball would have been extracted into your home directory (not /opt/data), and you lacked a dollar sign on the interpolation of the variable TAR_FILE_NAME.
I put the remote script in double quotes so that you can use $SOFTWARE_INSTALL_LOCATION inside it; notice how the interpolated value is wrapped in single quotes for the remote shell (this won't work if the value contains single quotes, of course).
You could perhaps avoid the chown if you could run the tar command as user posture directly.
Adding echo prompts to show progress is a trivial exercise.
Things will be a lot smoother if you have everything -- both ssh and sudo -- set up for passwordless access, but I believe this should work even if you do get a password prompt or two.
(If you do want to store the tarball remotely, just add a tee before the tar.)
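To illustrate both points, here is a minimal sketch of the same single-round-trip script with progress echoes added and a tee keeping a remote copy of the tarball (the message texts are my own choices; everything else follows the script above):
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB)
readonly TAR_FILE_NAME=abc.tar.gz
echo "Deploying $TAR_FILE_NAME to ${MACHINE_LOCATION[0]}..."
ssh david@${MACHINE_LOCATION[0]} "
set -e
echo 'Stopping test_server...'
sudo stop test_server
echo 'Cleaning $SOFTWARE_INSTALL_LOCATION...'
sudo rm -rf '$SOFTWARE_INSTALL_LOCATION'/*
echo 'Storing and extracting tarball...'
# tee stores a copy of the tarball remotely while tar extracts from the same stream
tee '$SOFTWARE_INSTALL_LOCATION/$TAR_FILE_NAME' | tar -C '$SOFTWARE_INSTALL_LOCATION' -x -z -f -
sudo chown -R posture '$SOFTWARE_INSTALL_LOCATION'
echo 'Starting test_server...'
sudo start test_server" <"$TAR_FILE_NAME"
echo "Deployment finished."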

You can generate a status message for each step by using the exit status of the ssh command, as I have done for the first two ssh commands below.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB) # I will have more machines here in future.
readonly TAR_FILE_NAME=abc.tar.gz
# An error exit function
function error_exit
{
  echo "$1" 1>&2
  exit 1
}
if ssh david@${MACHINE_LOCATION[0]} 'sudo stop test_server'; then
  echo "`date -u` INFO : Stopped test_server" 1>&2
else
  error_exit "`date -u` ERROR : Unable to stop test_server! Aborting."
fi
if ssh david@${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'; then
  echo "`date -u` INFO : Removed contents of /opt/data" 1>&2
else
  error_exit "`date -u` ERROR : Unable to remove! Aborting."
fi
scp "$TAR_FILE_NAME" david@${MACHINE_LOCATION[0]}:$SOFTWARE_INSTALL_LOCATION/
ssh david@${MACHINE_LOCATION[0]} 'tar -C /opt/data -xvzf /opt/data/abc.tar.gz'
ssh david@${MACHINE_LOCATION[0]} 'sudo chown -R posture /opt/data'
ssh david@${MACHINE_LOCATION[0]} 'sudo start test_server'
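To avoid repeating the if/else block for every step, you could wrap the pattern in a small helper; a minimal sketch (run_step is a hypothetical helper name, not from the original script):
# Runs the given command, printing an INFO line on success
# or aborting with an ERROR line on failure.
run_step() {
  local description=$1; shift
  if "$@"; then
    echo "`date -u` INFO : $description" 1>&2
  else
    error_exit "`date -u` ERROR : $description failed! Aborting."
  fi
}
run_step "Stopping test_server" ssh david@${MACHINE_LOCATION[0]} 'sudo stop test_server'
run_step "Removing /opt/data contents" ssh david@${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'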
For password management, you can set up passwordless SSH authentication:
http://www.linuxproblem.org/art_9.html
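The usual steps look something like this (assuming OpenSSH; the key type and default paths are my assumptions, not requirements):
# On machineA: generate a key pair (use an empty passphrase for fully
# unattended runs), then install the public key on machineB.
ssh-keygen -t ed25519
ssh-copy-id david@machineB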

Related

Unix: 'su user' not working and remains root inside SSH if condition [duplicate]

I've written a script that takes, as an argument, a string that is a concatenation of a username and a project. The script is supposed to switch (su) to the username and cd to a specific directory based upon the project string.
I basically want to do:
su $USERNAME;
cd /home/$USERNAME/$PROJECT;
svn update;
The problem is that once I do an su... it just waits there, which makes sense since the flow of execution has passed to the new user's shell. Once I exit, the rest of the commands execute, but not as desired.
I prepended su to the svn command, but the command failed (i.e. it didn't run the svn update in the desired directory).
How do I write a script that allows the user to switch user and invoke svn (among other things)?
Much simpler: use sudo to run a shell and use a heredoc to feed it commands.
#!/usr/bin/env bash
whoami
sudo -i -u someuser bash << EOF
echo "In"
whoami
EOF
echo "Out"
whoami
(answer originally on SuperUser)
The trick is to use the "sudo" command instead of "su".
You may need to add this
username1 ALL=(username2) NOPASSWD: /path/to/svn
to your /etc/sudoers file
and change your script to:
sudo -u username2 -H sh -c "cd /home/$USERNAME/$PROJECT; svn update"
Where username2 is the user you want to run the SVN command as and username1 is the user running the script.
If you need multiple users to run this script, use a %groupname instead of username1.
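For example (a sketch; devteam is a hypothetical group name):
# Any member of group devteam may run svn as username2 without a password
%devteam ALL=(username2) NOPASSWD: /path/to/svn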
You need to execute all the different-user commands as their own script. If it's just one, or a few commands, then inline should work. If it's lots of commands then it's probably best to move them to their own file.
su -c "cd /home/$USERNAME/$PROJECT ; svn update" -m "$USERNAME"
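If the commands are moved to their own file, the call could instead look like this (the script path is hypothetical):
# Hypothetical: the inline commands moved into a script owned by the target user
su -c "/home/$USERNAME/bin/update_project.sh" -m "$USERNAME"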
Here is yet another approach, which was more convenient in my case (I just wanted to drop root privileges and run the rest of my script as a restricted user): you can make the script restart itself as the correct user. This approach is more readable than using sudo or su -c with a "nested script". Let's suppose the script is started as root initially. Then the code will look like this:
#!/bin/bash
if [ $UID -eq 0 ]; then
  user=$1
  dir=$2
  shift 2 # if you need some other parameters
  cd "$dir"
  exec su "$user" "$0" -- "$@"
  # nothing will be executed beyond this line,
  # because exec replaces the running process with the new one
fi
echo "This will be run from user $UID"
...
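Invocation might then look like this (the script name and arguments are illustrative):
# Started as root; the script re-executes itself as 'someuser' inside the project dir
sudo ./myscript.sh someuser /home/someuser/project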
Use a script like the following to execute the rest or part of the script under another user:
#!/bin/sh
id
exec sudo -u transmission /bin/sh - << eof
id
eof
Use sudo instead
EDIT: As Douglas pointed out, you cannot use cd with sudo, since it is not an external command but a shell built-in. You have to run the commands in a subshell to make the cd work:
sudo -u $USERNAME -H sh -c "cd ~/$PROJECT; svn update"
The original (broken) suggestion was:
sudo -u $USERNAME -H cd ~/$PROJECT
sudo -u $USERNAME svn update
You may be asked to input that user's password, but only once.
It's not possible to change user within a shell script. Workarounds using sudo described in other answers are probably your best bet.
If you're mad enough to run perl scripts as root, you can do this with the $< $( $> $) variables which hold real/effective uid/gid, e.g.:
#!/usr/bin/perl -w
$user = shift;
if (!$<) {
  # drop the effective gid before the effective uid; once the process
  # is no longer effectively root, it can no longer change its gid
  $) = getgrnam $user;
  $> = getpwnam $user;
} else {
  die 'must be root to change uid';
}
system('whoami');
This worked for me
I split out my "provisioning" from my "startup".
# Configure everything else ready to run
config.vm.provision :shell, path: "provision.sh"
config.vm.provision :shell, path: "start_env.sh", run: "always"
then in my start_env.sh
#!/usr/bin/env bash
echo "Starting Server Env"
#java -jar /usr/lib/node_modules/selenium-server-standalone-jar/jar/selenium-server-standalone-2.40.0.jar &
#(cd /vagrant_projects/myproj && sudo -u vagrant -H sh -c "nohup npm install 0<&- &>/dev/null &;bower install 0<&- &>/dev/null &")
cd /vagrant_projects/myproj
nohup grunt connect:server:keepalive 0<&- &>/dev/null &
nohup apimocker -c /vagrant_projects/myproj/mock_api_data/config.json 0<&- &>/dev/null &
Inspired by the idea from @MarSoft, but I changed the lines as follows:
USERNAME='desireduser'
COMMAND=$0
COMMANDARGS="$(printf " %q" "${@}")"
if [ $(whoami) != "$USERNAME" ]; then
  exec sudo -E su $USERNAME -c "/usr/bin/bash -l $COMMAND $COMMANDARGS"
  exit
fi
I have used sudo to allow passwordless execution of the script. If you want to enter a password for the user, remove the sudo. If you do not need the environment variables, remove -E from sudo.
The /usr/bin/bash -l ensures that the profile.d scripts are executed, so you start with an initialized environment.

Retain root privileges during long processes

I have a bash script that makes a backup of my data files (~50GB). The script is basically something like this:
sudo tar -cf old-backup-1.tar /backup/mydata1
sudo tar -cf old-backup-2.tar /backup/mydata2
sudo rsync -a /mydata1/ /backup/mydata1
sudo rsync -a /mydata2/ /backup/mydata2
(I use sudo because some of the files are owned by root).
The problem is that because every command takes a long time, I lose root privileges between them: if I'm not present at the computer, the sudo password prompt times out and the script dies in the middle of the job.
Is there a way to retain root privileges for the entire script? What is the best way to approach this situation? I would prefer to run the script under my own user.
With a second shell:
sudo bash -c "command1; command2; command3; command4"
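Applied to the backup script above, that might look like this (a sketch; the tar/rsync invocations are assumed from the question's pseudocode):
# One sudo prompt up front; all four long-running commands share the root shell
sudo bash -c "
  tar -cf old-backup-1.tar /backup/mydata1
  tar -cf old-backup-2.tar /backup/mydata2
  rsync -a /mydata1/ /backup/mydata1
  rsync -a /mydata2/ /backup/mydata2
"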
Perhaps like this:
#!/bin/bash -eu
exec sudo /bin/bash <<'EOF'
echo I am $UID
whoami
#^the script
EOF
Alternatively, you could put something like:
if ! [ $(id -u) -eq 0 ]; then
  exec sudo "$0" "$@"
fi
at the top.
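Putting it together, a minimal sketch of the whole backup script using this approach (same assumed tar/rsync invocations as above):
#!/bin/bash
# Re-execute the whole script under sudo so root privileges cannot expire mid-run
if ! [ $(id -u) -eq 0 ]; then
  exec sudo "$0" "$@"
fi
tar -cf old-backup-1.tar /backup/mydata1
tar -cf old-backup-2.tar /backup/mydata2
rsync -a /mydata1/ /backup/mydata1
rsync -a /mydata2/ /backup/mydata2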

Commands don't echo after sudo as another user

I have a single command to ssh to a remote linux host and execute a shell script.
ssh -t -t $USER@somehost 'bash -s' < ./deploy.sh
Inside deploy.sh I have this:
#!/bin/bash
whoami; # I see this command echo
sudo -i -u someoneelse #I see this command echo
whoami; # I DON'T see this command echo, but response is correct
#subsequent commands don't echo
When I run the deploy.sh script locally all commands echo.
How do I get commands to echo after I sudo as another user over ssh?
I had to set -x AFTER the sudo to the other user:
#!/bin/bash
whoami;
sudo -i -u someoneelse
set -x #make sure echo on
whoami; #command echoed

Run command as root within shell script

I'm working on a script that will shred a USB drive and install Kali Linux with encrypted persistent data.
#! /bin/bash
cd ~/Documents/Other/ISOs/Kali
echo "/dev/sdx x=?"
read x
echo "how many passes to wipe? 1 will be sufficient."
read n
echo "sd$x will be wiped $n times."
read -p "do you want to continue? [y/N] " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]
then
  exit 1
fi
echo "Your role in the installation process is not over. You will be prompted to type YES and a passphrase."
sudo shred -vz --iterations=$n /dev/sd$x
echo "Wiped. Installing Kali"
sudo dd if=kali-linux-2.0-amd64.iso of=/dev/sd$x bs=512k
echo "Installed. Making persistence."
y=3
sudo parted /dev/sd$x mkpart primary 3.5GiB 100%
x=$x$y
sudo cryptsetup --verbose --verify-passphrase luksFormat /dev/sd$x
sudo cryptsetup luksOpen /dev/sd$x my_usb
sudo mkfs.ext3 -L persistence /dev/mapper/my_usb
sudo e2label /dev/mapper/my_usb persistence
sudo mkdir -p /mnt/my_usb
sudo mount /dev/mapper/my_usb /mnt/my_usb
sudo -i
echo "/ union" > /mnt/my_usb/persistence.conf
umount /dev/mapper/my_usb
cryptsetup luksClose /dev/mapper/my_usb
echo "Persistence complete. Installation complete."
It works nearly perfectly. These commands, individually entered into the terminal, produce the desired effect, but the problem comes in at this line:
sudo echo "/ union" > /mnt/my_usb/persistence.conf
That command won't work unless I'm logged in as root user. To solve this I tried adding the sudo -i command before, but once I do that all of the following commands are skipped.
It's okay if the suggested solution requires me to type in the password. I don't want the password stored in the script; that's just reckless.
Side note: I didn't make a generic form for this question because I want other people to be able to use this if they like it.
The problem is that the echo runs with root privilege, but the redirection happens in the original shell as the non-root user. Instead, try running an explicit sh under sudo and do the redirection in there:
sudo /bin/sh -c 'echo "/ union" > /mnt/my_usb/persistence.conf'
The problem is that when you type in the following command:
sudo echo "/ union" > /mnt/my_usb/persistence.conf
only the "echo" will be run as root through sudo; the redirection to the file using > will still be executed as the "normal" user, because it is not a command but something performed directly by the shell.
My usual solution is to use tee, so that it runs as a command and not as a shell built-in operation, like this:
echo "/ union" | sudo tee /mnt/my_usb/persistence.conf >/dev/null
Now the tee command will be run as root through sudo and will be allowed to write to the file. >/dev/null is just added to keep the output of the script clean. If you ever want to append instead of overwrite (i.e. where you would normally use >>), use tee -a.
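For example, appending a line would look like this (the line content is illustrative):
# Equivalent of >> persistence.conf, but with root privileges
echo "another line" | sudo tee -a /mnt/my_usb/persistence.conf >/dev/null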

script to copy, install and execute on multiple hosts

I am trying to copy a few files to multiple hosts and install/configure them on each, running specific commands depending on the OS type. The IP addresses for each host are read from a hosts.txt file.
It appears that when I run the script, it does not execute on the remote hosts. Can someone help identify the issues with this script? Sorry for the basic question; I am quite new to shell scripting.
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
echo "####Installing hqagent####"
while read host; do
scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
then
cd /opt
tar -xvzf $AGENT
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1/
cd /etc/init.d
chmod 755 hqagent.sh
chkconfig --add hqagent.sh
su - hqagent
/opt/agent-5.8.1/bin/hq-agent.sh start
else
cd /opt
tar -xvzf $AGENT
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
rm -rf /opt/hqagent.sh
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1
cd /etc/init.d
ln -s /opt/hqagent/agent-current/bin/hq-agent.sh hqagent.sh
cd /etc/init.d/rc3.d/
ln -s /etc/init.d/hqagent.sh S99hqagent
ln -s /etc/init.d/hqagent.sh K01hqagent
cd ../rc5.d
ln -s /etc/init.d/hqagent.sh S99hqagent
ln -s /etc/init.d/hqagent.sh K01hqagent
chkconfig --add hqagent.sh
su - hqagent
/opt/agent-5.8.1/bin/hq-agent.sh start
fi
done < hosts.txt
error:
tar (child): agent-x86-64-linux-5.8.1.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
mv: cannot stat `/opt/agent.properties.nondmz': No such file or directory
mkdir: cannot create directory `/opt/hqagent/': File exists
ln: creating symbolic link `/opt/hqagent/agent-current': File exists
useradd: user 'hqagent' already exists
groupadd: group 'hqagent' already exists
chown: cannot access `/opt/agent-5.8.1/': No such file or directory
chmod: cannot access `hqagent.sh': No such file or directory
error reading information on service hqagent.sh: No such file or directory
-bash: line 1: 10.145.34.6: command not found
-bash: line 2: 10.145.6.10: command not found
./hq-install.sh: line 29: /opt/agent-5.8.1/bin/hq-agent.sh: No such file or directory
It appears that the problem is that you run this script on the "master" server, but somehow expect the branches of your if-statement to be run on the remote hosts. You need to factor those branches out into their own files, copy them to the remote hosts along with the other files, and in your if-statement, each branch should just be a ssh command to the remote host, triggering the script you copied over.
So your master script would look something like:
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
# Scripts containing the stuff you want done on the remote hosts
centos_setup=centos_setup.sh
other_setup=other_setup.sh
echo "####Installing hqagent####"
while read host; do
  echo " ++ Copying files to $host"
  scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
  echo -n " ++ Running remote part on $host "
  if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
  then
    echo "(centos)"
    scp $centos_setup root@$host:/opt
    ssh root@$host "/opt/$centos_setup"
  else
    echo "(generic)"
    scp $other_setup root@$host:/opt
    ssh root@$host "/opt/$other_setup"
  fi
done < hosts.txt
The contents of the two auxiliary scripts would be the current contents of the if-branches in your original.
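For instance, centos_setup.sh would begin with the commands from the CentOS branch above (a sketch; the tarball name is hard-coded here because the exported $AGENT variable is not available on the remote host):
#!/bin/bash
# centos_setup.sh -- runs on the remote host, so these paths resolve there
cd /opt
tar -xvzf agent-x86-64-linux-5.8.1.tar.gz
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
# ...and so on with the remaining commands of that branch.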
