Shorthand for user home directory using parallel-scp - linux

I'm trying to upload the same files to a list of hosts using parallel-scp. The files need to go into each host's separate user home directory, e.g. one host looks like this: /home/host1user/www/, another like this: /home/host2user/www/, and yet another like this: /home/anotheruser/www/
The servers.txt file looks like this:
12.123.12.123 host1user
12.123.12.124 host2user
12.123.12.125 anotheruser
I've tried using this line:
parallel-scp -v -e error -h servers.txt ./testfile.txt ~/www
Doing this throws an error saying that
"scp: /home/localusername/www: No such file or directory"
which makes sense.
But, how do I get the correct username in the target directory?

Put the following into servers.txt
host1 user1
host2 user2
Your script can be something similar to this:
while read -r hostname username; do
    scp testfile.txt "$username@$hostname:/home/$username/www" &
    sleep 10
done < servers.txt
The "&" after the scp command puts it in the background so that you can launch another scp without having to wait for the previous scp to complete... allowing you to do multiple scp in parallel.
The sleep 10 is to prevent you from launching way too many scp processes in a short amount of time, especially if you do that to the same destination server, your connection will be terminated because it's considered an attack to that server.
Edit the sleep command to the number of seconds to wait accordingly... but if you have only a small number of servers/files then you don't even need it.
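If you have many hosts, a throttled loop avoids both the fixed sleep and an unbounded pile of processes. A minimal sketch, assuming the host/username format of servers.txt shown above (the max_jobs value of 5 is an arbitrary choice):
#!/bin/bash
max_jobs=5
while read -r host user; do
    scp testfile.txt "$user@$host:/home/$user/www" &
    # once the limit is reached, wait for the running copies to finish
    if (( $(jobs -r -p | wc -l) >= max_jobs )); then
        wait
    fi
done < servers.txt
wait   # wait for any remaining background copies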

Related

Linux shell script to download file(s) from server to PC while connected with putty

I am connected to a server through putty, and I want to download (to my PC) certain files on a regular basis using a shell script. Specifically, these are the files...
ls -t ~/backup | head -n2
What is the best strategy for this? I was trying with command line FTP but I am prompted to login to something. I'm already logged into the server that has the files I need to download, so I am missing something.
The SSH protocol can be a good way to do this, with the scp command. You can take a look at this thread.
To automate the process and script a solution, you will need to use password-less ssh with ssh keys.
The first step is to get the list of files to copy:
fils=$(ssh username@host 'ls -t ~/backup | head -n2')
Then, once we have the file names in the variable fils, we can loop over the entries and run a secure copy command for each:
while read -r fyle
do
    scp username@host:"~/backup/$fyle" "$fyle"
done <<< "$fils"

How to create one logfile when ssh-ing to multiple servers

I'd like to create a bash script to automatically connect myself to a bunch of servers, execute some commands there and save the output of these commands in one logfile on the server I use to connect myself to all the other servers.
So far I was able to create a logfile on each of the servers I'm connecting myself to or to display the output of each of the commands on the console of the server I use to get to all the other servers.
My script currently looks like this (I know about for loops, but I don't want to use them in this case because I need to execute different commands on each server):
#!/bin/bash
ssh server1 <<EOF
hostname
printf '\n'
mount
EOF
printf '\n'
printf '\n'
printf '\n'
ssh server2 <<EOF
hostname
printf '\n'
mount
EOF
...
My idea was to use the &>> operator, because I need to know whether all commands were executed successfully or not. In the end I'd like to have only one logfile, which should look somewhat like this:
server1
output of mount
server 2
output of mount
...
So, how can I manage to create only one large logfile that contains the results of all executed commands? Also, will this script still work correctly if I use the ssh -T option to get rid of the message "Pseudo-terminal will not be allocated because stdin is not a terminal."? And do I have to escape special characters like / _ - when using mount in my script to mount something?
Thanks in advance!
I suggest using open-source utilities like logstash or fluentd.
I would use Fabric, a tool for interacting with several servers over ssh. It provides operations for executing remote shell commands.
For your example, the fabfile:
from fabric.api import run

def my_task():
    run('hostname')
    run('mount')
And you can execute it:
fab -H server1,server2 my_task
Output goes to the standard output of the machine where you run fab, so you can easily redirect it to a file:
fab -H server1,server2 my_task > my_task.log
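If you would rather stay with plain bash and the &>> idea from the question, a minimal sketch (using the same heredoc style and the ssh -T option the question asks about) could look like this:
#!/bin/bash
LOG=servers.log
: > "$LOG"   # truncate the logfile at the start

ssh -T server1 &>> "$LOG" <<'EOF'
hostname
mount
EOF

ssh -T server2 &>> "$LOG" <<'EOF'
hostname
mount
EOF
No escaping of characters like / _ - is needed inside the heredoc; they are passed to the remote shell as-is.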

FTP status check using a variable - Linux

I am doing an FTP upload and I want to check its status. I don't want to use $? because it mostly returns 0 (success) for ftp even when the transfer itself didn't go through.
I know I can check the log file and grep it for "Transfer complete" (the 226 status). That works fine, but I don't want to do it that way because I have many different reports doing FTP, and creating multiple log files for all of them is what I want to avoid.
Can I get the logged information in a local script variable and process it inside the script itself?
Something similar to these (I've tried both but neither worked):
Grab FTP output in BASH SCRIPT
FTP status check whether successful or not
Below is something similar to what I am trying to do:
ftp -inv ${HOST} > log_file.log <<!
user ${USER} ${PASS}
bin
cd "${TARGET}"
put ${FEEDFILE}
bye
!
Any suggestions on how can I get the entire ftp output in a script variable and then check it within the script?
To capture stdout to a variable you can use bash's command substitution, so either OUTPUT=`cmd` or OUTPUT=$(cmd).
Here's an example how to capture the output from ftp in your case:
CMDS="user ${USER} ${PASS}
bin
cd \"${TARGET}\"
put \"${FEEDFILE}\"
bye"
OUTPUT=$(echo "${CMDS}" | ftp -inv "${HOST}")
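From there, a minimal check could grep the captured output for the 226 success reply (a sketch; adjust the pattern to the exact replies your FTP server produces):
if echo "${OUTPUT}" | grep -q "226"; then
    echo "FTP upload succeeded"
else
    echo "FTP upload failed:" >&2
    echo "${OUTPUT}" >&2
    exit 1
fi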

how to send different commands to multiple hosts to run programs in Linux

I am an R user. I always run programs on multiple computers on campus. For example, if I need to run 10 different programs, I need to open PuTTY 10 times to log into 10 different computers (their OS is Linux) and submit each program to one of them. Is there a way to log in to 10 different computers and send them commands at the same time? I use the following commands to submit programs:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script that sshes to each remote host to run the command. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
        host1)
            # quote the command so nohup, the redirect, and & all run on the remote host
            ssh $h 'nohup Rscript L_1_cc.R > L_1_sh.txt 2>&1 &'
            ;;
        host2)
            ssh $h 'nohup Rscript L_2_cc.R > L_2_sh.txt 2>&1 &'
            ;;
    esac
done
This is a very simplistic example. You can do much better than this; for example, you can put the ".R" and ".txt" file names into a variable and use that rather than explicitly listing every option in the case, as sketched below.
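For instance, a variable-based variant could look like this (a sketch, assuming the L_<n>_cc.R naming scheme from the question):
#!/bin/bash
hosts=(host1 host2 host3 host4 host5 host6 host7 host8 host9 host10)
for idx in "${!hosts[@]}"
do
    n=$((idx + 1))
    ssh "${hosts[$idx]}" "nohup Rscript L_${n}_cc.R > L_${n}_sh.txt 2>&1 &"
done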
Based on your topic tags, I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are working from is *nix-based so you can use the following script. If you are on Windows, consider Cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you to input a password each time you log into every target. You can then script the command execution on each target with something like the following:
#!/bin/bash
# kill script if we throw an error code during execution
set -e
# define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1 )
# define associated user names for each host
users=( joe bob steve )
# counter to track iteration for correct user name
j=0
# iterate through each host and ssh into each with a user@host combo
for i in ${hosts[*]}
do
    # modify the ssh command string as necessary to get your script to execute properly
    # you could even add commands to transfer the file into which you seem to be dumping your results
    ssh "${users[$j]}@$i" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
# exit no error
exit 0
If you set up the public key authentication, you should just have to execute your script to make every remote host do its thing. You could even look into loading the users/hosts data from a file to avoid hard-coding that information into the arrays; see the sketch below.
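For example, the host/user data could come from a file of "user host script" triples (a sketch; the jobs.txt format here is hypothetical):
#!/bin/bash
# jobs.txt lines look like:  joe host1 L_1_cc.R
while read -r user host script; do
    log="${script%_cc.R}_sh.txt"   # L_1_cc.R -> L_1_sh.txt
    # -n keeps ssh from swallowing the rest of jobs.txt on stdin
    ssh -n "$user@$host" "nohup Rscript $script > $log 2>&1 &"
done < jobs.txt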

How can I scp a file and run an ssh command asking for password only once?

Here's the context of the question:
In order for me to be able to print documents at work, I have to copy the file over to a different computer and then print from that computer. (Don't ask. It's complicated and there is not another viable solution.) Both of the computers are Linux and I work in bash. The way I currently do this is I scp the file over to the print computer and then ssh in and print from command line.
Here's what I would like to do:
In order to make my life a bit easier, I'd like to combine these two step into one. I could easily write a function that did both these steps, but I would have to provide my password twice. Is there any way to combine the steps so that I only provide my password once?
Before somebody suggests it, key-based ssh-logins are not an option. It has been specifically disabled by the Administrators for security reasons.
Solution:
What I ended up doing was a modification of the second solution Wrikken provided. Simply wrapping up his first suggestion in a function would have gotten the job done, but I liked the idea of being able to print multiple documents without having to type my password once per document. I have a rather long password and I'm a lazy typist :)
So, what I did was take a sequence of commands and wrap them up in a python script. I used python because I wanted to parameterize the script, and I find it easiest to do in python. I cheated and just ran bash commands from python through os.system. Python just handled parameterization and flow control. The logic was as follows:
if socket does not exist:
run bash command to create socket with timeout
copy file using the created socket
ssh command to print using socket
In addition to using a timeout, I also have an option in my python script to manually close the socket should I wish to do so.
If anyone wants the code, just let me know and I'll either paste-bin it or put it on my git repo.
ssh user@host 'cat - > /tmp/file.ext; do_something_with /tmp/file.ext; rm /tmp/file.ext' < file.ext
Another option would be to just leave an ssh tunnel open:
In ~/.ssh/config:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/ssh-socket-%r-%h-%p
$ ssh -f -N -l user host
(socket is now open)
Subsequent ssh/scp requests will reuse the already existing tunnel.
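For example, once the master connection is open, a copy-and-print pair of commands will not prompt for a password again (lp here is just a stand-in for whatever print command the remote machine uses):
$ scp file.ext user@host:/tmp/
$ ssh user@host 'lp /tmp/file.ext; rm /tmp/file.ext'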
Here is a bash script template which follows @Wrikken's second method, but can be used as-is; there is no need to edit the user's SSH config file:
#!/bin/bash
TARGET_ADDRESS=$1 # the first script argument
HOST_PATH=$2 # the second script argument
TARGET_USER=root
TMP_DIR=$(mktemp -d)
SSH_CFG=$TMP_DIR/ssh-cfg
SSH_SOCKET=$TMP_DIR/ssh-socket
TARGET_PATH=/tmp/file
# Create a temporary SSH config file:
cat > "$SSH_CFG" <<ENDCFG
Host *
ControlMaster auto
ControlPath $SSH_SOCKET
ENDCFG
# Open a SSH tunnel:
ssh -F "$SSH_CFG" -f -N -l $TARGET_USER $TARGET_ADDRESS
# Upload the file:
scp -F "$SSH_CFG" "$HOST_PATH" $TARGET_USER#$TARGET_ADDRESS:"$TARGET_PATH"
# Run SSH commands:
ssh -F "$SSH_CFG" $TARGET_USER#$TARGET_ADDRESS -T <<ENDSSH
# Do something with $TARGET_PATH here
ENDSSH
# Close the SSH tunnel:
ssh -F "$SSH_CFG" -S "$SSH_SOCKET" -O exit "$TARGET_ADDRESS"
# Clean up the temporary config directory:
rm -rf "$TMP_DIR"
