I'm having issues with my Unix FTP script...
It only transfers the first three files from the directory that I lcd into during the FTP session.
Here's the bash script that I'm using:
#!/bin/sh
YMD=$(date +%Y%m%d)
HOST='***'
USER='***'
PASSWD='***'
FILE='*.png'
RUNHR=19
ftp -inv ${HOST} <<EOF
quote USER ${USER}
quote PASS ${PASSWD}
cd /models/rtma/t2m/${YMD}/${RUNHR}/
mkdir /models/rtma/t2m/${YMD}/
mkdir /models/rtma/t2m/${YMD}/${RUNHR}/
lcd /home/aaron/grads/syndicated/rtma/t2m/${YMD}/${RUNHR}Z/
binary
prompt
mput ${FILE}
quit
EOF
exit 0
Any ideas?
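One likely culprit (an assumption, offered as a sketch rather than a confirmed diagnosis): ftp -i already turns interactive prompting off, so the prompt command inside the session toggles it back on. mput then asks for confirmation per file and consumes the remaining here-document lines as y/n answers, which can stop the transfer after the first few files. A reworked session with prompt dropped and the mkdir calls moved before the cd (wrapped in a function so nothing runs until you call it; host and credentials are the question's placeholders):

```shell
#!/bin/sh
# Hypothetical rework of the script above; HOST/USER/PASSWD are placeholders.
# Note: with ftp -i, prompting is already off, so there is no "prompt" line --
# leaving it in would re-enable per-file confirmation during mput.
upload_pngs() {
    YMD=$(date +%Y%m%d)
    RUNHR=19
    ftp -inv "$HOST" <<EOF
quote USER $USER
quote PASS $PASSWD
mkdir /models/rtma/t2m/$YMD/
mkdir /models/rtma/t2m/$YMD/$RUNHR/
cd /models/rtma/t2m/$YMD/$RUNHR/
lcd /home/aaron/grads/syndicated/rtma/t2m/$YMD/${RUNHR}Z/
binary
mput *.png
quit
EOF
}
```

The mkdir lines also come before the cd here, since cd into a directory that does not exist yet fails and leaves mput uploading into the wrong place.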
I had encountered the same issue: I had to transfer 400K files, but mput * or mput *.pdf would not move all the files in one go.
tried a timeout: fails
tried -r (recursive): fails
tried increasing the data/control timeout in IIS: fails
tried -i and prompt in a script: fails
Finally I resorted to portable FileZilla, connected to the source, and transferred all the files.
I want to run a script that deletes a directory on this computer and copies over a replacement from a connected host using the scp command.
Here is the script:
#!/bin/bash
echo "Moving Production Folder Over"
cd ~
sudo rm -r Production
scp -r host@192.168.123.456:/home/user1/Documents/Production/file1 /home/user2/Production
I would want to cd into the Production directory after it is copied over. How can I go about this? Thanks!
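One thing to keep in mind: a cd inside a script only changes the script's own shell, not the terminal that launched it. So you can cd at the end of the script (which affects any later commands in that same script), or source the script if the directory change should persist in your interactive shell. A minimal sketch along the lines of the question (the host, user names, and paths are the question's placeholders, not verified values):

```shell
#!/bin/bash
# Hypothetical sketch; host/usernames/paths come from the question above.
refresh_production() {
    cd ~ || return 1
    echo "Moving Production Folder Over"
    sudo rm -rf Production
    scp -r host@192.168.123.456:/home/user1/Documents/Production/file1 \
        /home/user2/Production || return 1
    # This cd affects the remainder of this function/script only,
    # not the interactive shell that invoked the script.
    cd /home/user2/Production || return 1
}
```

If you want your interactive shell to end up in Production, run the script with source (or .) instead of executing it as a child process.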
I am using a bash script on Linux to transfer files to a server. The script runs from cron, and I have redirected its output to a file, but the logs do not tell me whether the file actually reached the B server.
This is the cron:
1>>/home/owais/script_test/logs/res_sim_script.logs 2>>/home/owais/script_test/logs/res_sim.logs
And the FTP is as below:
cd ${dir}
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
lcd $dir
cd $destDir
bin
prompt
put FILENAME
bye
END_SCRIPT
The only thing that I get in the logs is:
Local directory now Directory_Name
Interactive mode off.
Instead of using FTP, consider rsync. Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to or from another host over any remote shell, or to or from a remote rsync daemon.
More information at the following webpage, https://linux.die.net/man/1/rsync
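As a sketch of what that could look like for a cron job like the one above (the host, user, and paths here are made-up placeholders, not values from the question), wrapped in a function so nothing runs until it is called:

```shell
#!/bin/bash
# Hypothetical sketch: push a directory to a remote host with rsync over ssh.
# -a preserves permissions and timestamps, -v is verbose, -z compresses
# in transit; --log-file writes a per-transfer record you can inspect later.
push_logs() {
    rsync -avz --log-file="$HOME/script_test/logs/rsync.log" \
        "$HOME/script_test/data/" \
        "owais@example.com:/remote/destDir/"
}
```

A practical advantage over scripted ftp: rsync exits non-zero on failure, so cron-driven scripts can check the status directly, and the log file records exactly which files were transferred.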
I have used ftp -inv Host << EOF >> LogFilePath and it worked. Thank you all for the support
I have a simple shell script that transfers a daily log file to a Windows FTP server.
The problem is that if the file is already there, it still uploads a new copy even though the file name is exactly the same.
How can I add a quick check to this script so that, if the file is already there, it skips the FTP transfer?
ftp -n -v $HOST << EOT
user $USER $PASSWD
prompt
bin
mput $FILE
bye
EOT
This is easy in Unix with ftp. First log in to the system through ftp and run an ls -ltr to list the remote files into a history.txt file (see my example below). Then, before transferring a file, check whether it is already listed in the history file; if it is, do not transfer it. I do it like this:
HISTORY_FILE="history.txt"
ftp -n -v $HOST << EOT
user $USER $PASSWD
prompt
bin
ls -ltr $HISTORY_FILE
bye
EOT
Now you can check with the command below:
ISFILENAMEEXIST=$(grep "$FILE" "$HISTORY_FILE")
If the file exists in history.txt, do not send it; if it is not there, send it through ftp.
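Putting the pieces together, a minimal local sketch of the check (the file names below are made up for illustration; in the real script history.txt would come from the ftp ls step above):

```shell
#!/bin/bash
# Hypothetical sketch of the "already uploaded?" check.
HISTORY_FILE=$(mktemp)
# Pretend the ftp "ls" step wrote these remote file names:
printf '%s\n' "report_20240101.csv" "report_20240102.csv" > "$HISTORY_FILE"

FILE="report_20240102.csv"
# -q: quiet (exit status only), -F: match the name literally, not as a regex
if grep -qF -- "$FILE" "$HISTORY_FILE"; then
    ACTION="skip"    # already on the server; do not re-send
else
    ACTION="send"    # not in history; run the ftp put
fi
echo "$ACTION $FILE"
```

Using grep's exit status directly avoids capturing output into a variable just to test whether it is empty.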
We need to move all the files from a particular folder to an FTP server. We wrote a script but get a "directory not found" error. Below is the script.
#!/bin/sh
HOST='192.168.1.100'
USER='ramesh'
PASSWD='rameshftp'
ftp -inv $HOST << EOF
user $USER $PASSWD
cd /home/Ramesh
put *.zip
bye
EOF
Our requirement is to copy all the files that reside in a directory on a SUSE Linux server to an FTP server; for example, take everything under "/home/Ramesh" and put it on the FTP server.
You can do this in one line with ncftp:
ncftpput -u username -p password ftp-host-name /path/to/remote/dir /path/to/local/dir/*
See http://linux.die.net/man/1/ncftp for more info
This code tries to ssh from my local server to a remote server and run some commands.
ssh root@$remoteip 'bash -s' <<END3
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
directory=$(dirname ${dir})
echo $dir >> dirstr.txt
mkdir -p $directory
chown $root:$root $directory
chmod 777 $directory
done
END3
The above creates the directory structure on the remote server, and it works fine.
I want to tar up the same directory structure, so I am using the same logic as above.
ssh root@$remoteip 'bash -s' <<END3
touch emptyfile
tar -cf gcda.tar emptyfile
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
tar -rf gcda.tar $dir
done
END3
This piece of code should create a tar containing all the directories returned by the for loop. When I copy the script to the remote server and run it there, it works. But when I run it over an ssh connection from my local server, it never enters the for loop: nothing is appended to the tar file created with the empty file in the second line.
Try <<'END3'
Note the quotes around END3, they prevent shell substitutions inside the here-document. You want the $-signs to be transferred to the other side of the ssh connection, not interpreted locally. Same for the backticks.
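The difference is easy to see locally, with cat standing in for ssh:

```shell
#!/bin/bash
X=remote

# Unquoted delimiter: $X is expanded by the local shell
# before the here-document body reaches the command.
unquoted=$(cat <<END3
value: $X
END3
)

# Quoted delimiter: the body is passed through verbatim,
# so a remote shell on the other side of ssh would see the literal $X.
quoted=$(cat <<'END3'
value: $X
END3
)

echo "$unquoted"   # value: remote
echo "$quoted"     # value: $X
```

In the question's script, the unquoted END3 meant that `strings binary | egrep '.gcda$'` and the $-expansions ran on the local machine, where the gcda files do not exist, so the loop had nothing to iterate over.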
Extracted from the comments as the accepted answer. Posting as community wiki