Transfer multiple files on single sftp connection in bash [duplicate] - linux

This question already has answers here:
Unix sftp - mput command - transfer all files with a specific prefix
(2 answers)
Closed 3 years ago.
I want to transfer multiple files over a single sftp connection from a folder where new files are continuously being generated, using a shell script.
I'm following the heredoc approach from this answer, but it fails:
Loop inside "heredoc" in shell scripting
Something like the code below:
sftp -P 8922 <server> <<EOF
while [[ true ]]
do
listOfFiles=$(ls -1)
if [[ ! -z $listOfFiles ]]
then
put * /somedir
fi
done
EOF
How can I achieve this?

sftp is not a shell; it does not execute scripts like this.
You need to execute a script that prints all the put commands, and pipe it to sftp.
for i in *
do
echo "put $i /somedir"
done | sftp -P 8922 <server>
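A related sketch, assuming the same port and the hypothetical remote directory /somedir from the question: write the put commands into a batch file and run it with sftp -b, which runs non-interactively and aborts on the first failed command.

```shell
# Build a batch file of put commands in a throwaway demo directory;
# the filenames are invented examples.  Run the batch later with:
#   sftp -P 8922 -b "$batch" <server>
cd "$(mktemp -d)"
touch report1.csv report2.csv

batch=$(mktemp)
for f in *; do
    printf 'put %q /somedir\n' "$f"    # %q escapes unusual filenames
done > "$batch"

cat "$batch"
```

Writing to a file created by mktemp (outside the current directory) also keeps the batch file itself out of the `*` glob.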

Related

How do I properly use SSH heredoc?

This question is somewhat related to the question I asked here, but it has not been adequately answered. What interests me here is the following:
When I run the command type -t test on the remote computer, I get the answer 'function', because 'test' is a function defined in the .bashrc file on the remote computer.
However, when I run this SSH command on the local computer,
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t test)
echo $VAR
EOSSH
)"
echo $s
I don't get anything printed. The first question would be how do I make this work?
The second question builds on the previous one. That is, my ultimate goal is to define on a local computer which function I want to check on a remote computer and get an adequate answer, i.e.:
a="test"
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t $a)
echo $VAR
EOSSH
)"
echo $s
So, I would like the variable s to be equal to 'function'. How to do it?
how do I make this work?
Either load .bashrc (. .bashrc) or start an interactive session (bash -i).
Because your session is non-interactive, .bashrc is not read automatically. If you want it loaded and it has no guard against non-interactive use, just source it. Otherwise, move your function somewhere else, into a file you can source. And if you do start an interactive session instead, be prepared for it to print /etc/motd, /etc/issue, and other interactive output.
Remove -T; you do not need a tty for non-interactive work.
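A minimal local sketch of why nothing is printed (the function name myfn and the temporary rc file are stand-ins for the remote 'test' function and .bashrc): a non-interactive bash does not read .bashrc, so a function defined there is invisible until the file is sourced explicitly.

```shell
# Simulate the remote .bashrc with a temporary rc file; 'myfn' is a
# made-up function name standing in for the remote 'test' function.
rc=$(mktemp)
echo 'myfn() { echo hello; }' > "$rc"

bash -c 'type -t myfn' || echo "myfn not found"   # rc never loaded
bash -c "source '$rc'; type -t myfn"              # prints: function
```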
I would like the variable s to be equal to 'function'. How to do it?
I recommend using declare to transfer all the functions and variables you need. This is flexible, works generically, preserves STDIN, and avoids dealing with the intricacies of escaping inside a here document. Specifically, request a bash shell on the remote side and use printf "%q" to properly escape all the data.
functions_to_check=(a b c)
fn_exists() { [[ "$(LC_ALL=C type -t -- "$1" 2>/dev/null)" = function ]]; }
work() {
    for f in "${functions_to_check[@]}"; do
        if fn_exists "$f"; then
            echo "Great - function $f exists!"
        else
            echo "Och nuu - no function $f!"
        fi
    done
}
ssh "$host" "$(printf "%q " bash -c "
    $(declare -p functions_to_check)   # transfer variables
    $(declare -f fn_exists work)       # transfer functions
    work                               # run the work to do
")"

How do I get the files from SFTP server and move them to another folder in bash script?

How do I get files one by one from an SFTP server and move them to another folder in an Ubuntu bash script?
#!bin/sh
FOLDER=/home/SFTP/Folder1/
sftp SFTP@ip_address
cd /home/FSTP/Folder1/
for file in "$FOLDER"*
<<EOF
cd /home/local/Folder1
get $file
EOF
mv $file /home/SFTP/Done
done
I know it's not right, but I've tried my best, and if anyone can help me, I will appreciate it. Thanks in advance.
OpenSSH sftp is not a very powerful client for such tasks. You would have to run it twice: first to collect the list of files, then use that list to generate a list of commands and execute those in a second run.
Something like this:
# Collect list of files
files=`sftp -b - user@example.com <<EOF
cd /source/folder
ls
EOF`
files=`echo $files|sed "s/.*sftp> ls//"`
# Use the list to generate list of commands for the second run
(
echo cd /source/folder
for file in $files; do
echo get $file
echo rename $file /backup/folder/$file
done
) | sftp -b - user@example.com
Before you run the script on production files, I suggest you first write the generated command list to a file to check that the results are as expected.
Just replace the last line with:
) > commands.txt
Maybe use sftp's internal get command, run inside an sftp session or batch file:
get -r $remote_path $local_path
or with the -f flag to flush (fsync) downloaded files to disk:
get -rf $remote_path $local_path

Perform SSH remote cmd exec on multiple local servers from input (sshpass?) [duplicate]

This question already has answers here:
Loop through a file with colon-separated strings
(2 answers)
Using a variable's value as password for scp, ssh etc. instead of prompting for user input every time
(10 answers)
How to automate password entry?
(5 answers)
Closed 4 years ago.
I am currently looking for a solution for executing remote commands on multiple local servers from an input file containing 'user:password:ip' entries in the following format:
jboss5:manager:192.168.1.101
database1:db01:192.168.20.6
server8:localnet:192.168.31.83
x:z:192.168.1.151
test:mynet:192.168.35.44
.... and others
Some commands I wish to execute remotely:
cd $HOME; ./start_script.sh; wget 192.168.1.110/monitor.sh; chmod +x monitor.sh; ./monitor.sh
I know there is a utility called "sshpass" but not sure how I could apply this utility for my needs.
I am open to any ideas in order to fulfill my need, any help would be very appreciated!
Thanks
Did you think about using SSH keys (check man ssh-keygen)? You would be able to connect without entering a password.
However, if you can't, just try:
for i in $(< your_file); do
    user=$(echo $i | cut -d: -f1)
    pass=$(echo $i | cut -d: -f2)
    ip=$(echo $i | cut -d: -f3)
    sshpass -p "$pass" ssh "$user@$ip" "your commands &"
done
Instead of using cd $HOME, use the full paths to your scripts. And don't forget the & to send the process to the background.
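The three cut calls can also be replaced with a single read: with IFS=: set, read splits each line into the user, password, and IP fields directly. A dry-run sketch (the sample lines come from the question; the echo stands in for the real sshpass invocation):

```shell
# Split user:password:ip lines with read instead of cut.  The echo is a
# dry run; swap in the sshpass line once the output looks right.
hosts_file=$(mktemp)
printf '%s\n' 'jboss5:manager:192.168.1.101' 'database1:db01:192.168.20.6' > "$hosts_file"

while IFS=: read -r user pass ip; do
    # sshpass -p "$pass" ssh "$user@$ip" './monitor.sh &'
    echo "would run: ssh $user@$ip"
done < "$hosts_file"
```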

Assigning variables inside remote shell script execution over SSH [duplicate]

This question already has answers here:
is it possible to use variables in remote ssh command?
(2 answers)
Closed 6 years ago.
I am trying to execute some shell script on a remote server via SSH.
Given below is a code sample:
ssh -i $KEYFILE_PATH ubuntu@$TARGET_INSTANCE_IP "bash" << EOF
#!/bin/bash
cat /home/ubuntu/temp.txt
string=$(cat /home/ubuntu/temp.txt )
echo $string
EOF
cat prints the expected result but
$string prints nothing.
How do I store the return value of cat in a variable?
You need to make the content of the here-document literal; otherwise it is expanded in the current shell, not in the desired remote shell.
Quote EOF:
ssh .... <<'EOF'
...
...
EOF
You should be able to simply do this:
ssh -i $KEYFILE_PATH ubuntu@$TARGET_INSTANCE_IP "bash" <<'_END_'
cat /home/ubuntu/temp.txt
string=$(cat /home/ubuntu/temp.txt)
echo $string
_END_
<<'_END_' ... _END_ is called a quoted here-document, or "heredoc" literal. The single quotes around '_END_' prevent the local shell from expanding variables and command substitutions inside the heredoc.
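The difference is easy to see locally, with a plain bash standing in for the ssh session (the variable name is illustrative):

```shell
string="expanded locally"

# Unquoted delimiter: the outer shell substitutes $string before bash sees it
bash <<EOF
echo "outer: $string"
EOF

# Quoted delimiter: the text is passed verbatim; the inner shell expands it
bash <<'EOF'
string="expanded by the inner shell"
echo "inner: $string"
EOF
```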
The intermediate shell is not required (assuming you use bash on the remote system), and you don't have to use a here-document at all. Just pass a multiline command:
ssh -i $KEYFILE_PATH ubuntu@$TARGET_INSTANCE_IP '
cat /home/ubuntu/temp.txt
string=$(cat /home/ubuntu/temp.txt )
echo $string
'
Note I am using single quotes to prevent evaluation on the local shell.

How to include stdout/stderr redirection with command subsitution when command passed as a variable? [duplicate]

This question already has answers here:
Why does shell ignore quoting characters in arguments passed to it through variables? [duplicate]
(3 answers)
Closed 6 years ago.
I have the following script to detect when my network comes back on after restarting my router:
#!/bin/bash
pingCommand="ping 192.168.1.1 -c 3 &>/dev/null"
while [[ ! $($pingCommand) ]]; do
sleep 3s;
done
However, when run in the terminal, it prints:
ping: unknown host &>/dev/null
I ran the script with the -x option (to enable debugging) and found that
ping 192.168.1.1 -c 3 &>/dev/null
was being executed in the subshell as
ping 192.168.1.1 -c 3 '&>/dev/null'
How do I change my command substitution call so that bash does not put single quotes around the output redirection?
Don't store commands in variables. Use functions. They handle redirections and pipes without the quoting issues that plague variables.
It also doesn't make sense to try to capture ping's output when you're redirecting all of that output to /dev/null. If you just want to know if it worked or not, check its exit code.
pingCommand() {
ping 192.168.1.1 -c 3 &>/dev/null
}
while ! pingCommand; do
sleep 3s;
done
Use eval:
#!/bin/bash
pingCommand="ping 192.168.1.1 -c 3 &>/dev/null"
# set -x # uncomment to see what's going on
while ! eval $pingCommand ; do
sleep 3s;
done
And you do not need the [[ ]] (expression evaluation) or the $() (output capture).
Of course, as John Kugelman suggested in another answer, using functions avoids all the potential pitfalls associated with eval.
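If the command genuinely has to live in a variable, a bash array is the usual middle ground: each word stays a separate element, although redirections still have to be written outside the variable. In this sketch, echo stands in for ping so it runs without a network:

```shell
# An array keeps each argument intact; the redirection is applied at the
# call site, not stored in the variable.
pingCommand=(echo ping 192.168.1.1 -c 3)

if "${pingCommand[@]}" >/dev/null; then
    echo "command succeeded"
fi
```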
