WinSCP - Are multiple "put" commands in WinSCP script processed at the same time or in sequence?

You can safely laugh at me, but I'm just trying to see what the commands below would do.
I am guessing that WinSCP will process the first put command, matching the file mask (Morgs*.*), and only when it finishes processing those files will it proceed to the next put command and process any remaining files matched by *.*.
Or will it create parallel connections and process both put commands at the same time?
I am basically attempting to first process all files that begin with Morgs. That pass will skip the .Done file, which will then get picked up by the next put command, which matches all file types; at that point only the .Done file will remain.
The command instructions below will be stored in a text file and invoked from a sample.cmd file like this:
"C:\Program Files (x86)\WinSCP\WinSCP.exe" /SCRIPT="C:\morgs\secure\local-folder\Test Project\CommandInstructions.txt"
open "Morgs Secure Connection"
cd "/users/morgs/secure/folder/Test Project"
lcd "C:\morgs\secure\local-folder\Test Project"
put -delete -nopreservetime Morgs*.*
put -delete -nopreservetime *.*
bye
Thanks in advance...

WinSCP processes the commands one-by-one.
No parallel upload takes place.
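If you want to confirm the ordering for yourself, WinSCP can also write a session log that records each command as it is executed; for example (the log path is just an illustration):
"C:\Program Files (x86)\WinSCP\WinSCP.exe" /SCRIPT="C:\morgs\secure\local-folder\Test Project\CommandInstructions.txt" /LOG="C:\morgs\winscp-session.log"
The log will then show the transfers of the second put starting only after the first put has finished.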

Related

Wait Until Previous Command Completes

I have written a bash script on my Mac mini that executes whenever a file has finished downloading. After the download is complete, the Mac mounts my NAS, renames the file, copies the file from the Mac to the NAS, deletes the file from the Mac, and then unmounts the NAS.
My issue is that sometimes the NAS takes a few seconds to mount. When that happens, I receive an error that the file could not be copied because the directory doesn't exist.
When the NAS mounts instantly (if the file size is small), the file copies, the file is deleted, and the NAS unmounts.
When the file size is large, the copying process stops when the file is deleted.
What I'm looking for is: how do I make the script wait until the NAS is mounted, and then how do I make the script wait again until the file copy is complete?
Thank you for any input. Below is my code.
#connect to NAS
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
#I WANT A WAIT COMMAND HERE SO THAT SCRIPT DOES NOT CONTINUE UNTIL NAS IS MOUNTED
#move to folder where files are downloaded
cd /Users/account/Downloads
#rename files and copy to server
for f in *; do
  newName="${f/)*/)}";
  mv "$f"/*.zip "$newName".zip;
  cp "$newName".zip /Volumes/folder_for_files/"$newName".zip;
  #I NEED SCRIPT TO WAIT HERE UNTIL FILE HAS COMPLETED ITS TRANSFER
  rm -r "$f";
done
#unmount drive
diskutil unmount /Volumes/folder_for_files
I no longer have a Mac to try this on, but it seems open 'smb://...' is the only command here that does not wait for completion; it does the actual work in the background instead.
The best way to fix this would be to use something other than open to mount the NAS share. According to this answer the following should work, but for lack of a Mac and a NAS I cannot test it.
# replace `open ...` with this
osascript <<< 'mount volume "smb://username:password@ip_address_of_NAS"'
If that does not work, use this workaround which manually waits until the NAS is mounted.
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
while [ ! -d /Volumes/folder_for_files/ ]; do sleep 0.1; done
# rest of the script
You can use a loop that sleeps for, say, five seconds and then runs smbstatus, checking whether its output contains a string identifying your smb://username:password@ip_address_of_NAS/folder_for_files connection.
Once that string is found, start copying your files. You could also keep a counter so that the loop gives up after a certain number of sleeps if the connection has still not succeeded.
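A rough sketch of that idea, using the mount-point check from the answer above instead of smbstatus (the five-second interval and the twelve retries are arbitrary choices):
tries=0
until [ -d /Volumes/folder_for_files/ ]; do
  tries=$((tries + 1))
  if [ "$tries" -ge 12 ]; then
    echo "NAS still not mounted after 60 seconds, giving up" >&2
    exit 1
  fi
  sleep 5
done
# NAS is mounted; safe to start renaming and copying files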

Move directory content once it was last modified (The directory) more than X time ago

My question is as follows:
I am trying to create a "buffer" folder (Lets call it "PROCESS") for which i run a 'scp' command on any files in it (*), and mv them after to a dump folder ("BACKUP").
Now, i had a case in which the connection broke in the middle of my bash script thus leaving the files in the PROCESS folder.
In my script i have a check that if i have files in PROCESS folder, i will skip the script iteration (cron job running evey 1m), thus the script will always be skipped.
I need a command/code which checks if i have files in PROCESS folder that moved into the folder more than X time ago (120 mins for the example), and move them back to the "NEW" folder, is there an easy way to do so?
find /path/PROCESS -mmin +120 -exec mv /path/PROCESS/* /path/NEW
This is what I have come up with so far, but unfortunately it doesn't appear to work.
Please help :)
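For what it is worth, a corrected form of that command would let find hand each matched file to mv instead of relying on a shell glob, and would restrict the match to regular files directly inside PROCESS. An untested sketch:
find /path/PROCESS -maxdepth 1 -type f -mmin +120 -exec mv {} /path/NEW/ \;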

sftp script failing when one of the sftp statement fails in Linux

I have an OpenSSH sftp script that transfers files from an SFTP server (Solaris) to an application server (Linux). The transfers happen from different locations, and the same files are transferred back to different locations on the SFTP server. But if any transfer fails because a file is not available, the script does not continue with the remaining sftp commands; instead it just exits. Below is the script.
export SSHPASS=*******
/usr/local/bin/sshpass -e sftp -oPort=22 -oBatchMode=no -b - rkwlahtt@10.204.140.14 << !
cd /home/rkwlahtt/Inbound
mget *.*
rm *.*
cd /home/rkwlahtt/Inbound/Adhoc
mget *.*
rm *.*
cd /home/rkwlahtt/Archive/Inbound
mput *.TXT
mput *.txt
cd /home/rkwlahtt/Archive/Adhoc
mput *.xlsx
bye
!
Here, in the above script, when I try to mget from the /home/rkwlahtt/Inbound folder and no file exists, sftp just exits instead of going on to the next commands (cd /home/rkwlahtt/Inbound/Adhoc and mget). The same thing happens with mput.
This is the first time we are transferring from different locations in the same script, and it is causing problems with our transfers.
Please let me know what can be done to resolve this issue.
You can suppress an abort on error on a per-command basis using a - prefix:
-mget *.*
Another option is to remove the -b - switch.
The -b switch does two things: first, it enables batch mode (= abort on any error); second, it sets a script file, except when you use - instead of a script file name, in which case the commands are read from standard input, which is the default anyway. You do not need the second effect (as you use - anyway) and you do not want the first.
Even without the switch, you can still feed the commands using input redirection, as you are doing.
You just need to make sure no command asks for any input, as then some of your commands would be consumed as that input instead of being executed.
See https://man.openbsd.org/sftp#b
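Applied to the script from the question, the per-command variant would look roughly like this (a sketch; only the transfer and delete commands, which may legitimately fail when no files match, get the prefix):
export SSHPASS=*******
/usr/local/bin/sshpass -e sftp -oPort=22 -oBatchMode=no -b - rkwlahtt@10.204.140.14 << !
cd /home/rkwlahtt/Inbound
-mget *.*
-rm *.*
cd /home/rkwlahtt/Inbound/Adhoc
-mget *.*
-rm *.*
cd /home/rkwlahtt/Archive/Inbound
-mput *.TXT
-mput *.txt
cd /home/rkwlahtt/Archive/Adhoc
-mput *.xlsx
bye
!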

Prevent files to be moved by another process in linux

I have a problem with a bash script.
I have two cron tasks which get some number of files from the same folder for further processing.
ls -1h "targdir/*.json" | head -n ${LIMIT} > ${TMP_LIST_FILE}
while read REMOTE_FILE
do
  mv $REMOTE_FILE $SCRDRL
done < "${TMP_LIST_FILE}"
rm -f "${TMP_LIST_FILE}"
But when two instances of the script run simultaneously, the same file gets listed by both and is moved to a $SCRDRL that differs between the instances.
The question is: how do I prevent files from being moved by a different instance of the script?
UPD:
Maybe I was a little unclear...
I have a folder "targdir" where I store JSON files, and I have two cron tasks which take some files from that directory to process. For example, if 25 files exist in targdir, the first cron task should take the first 10 files and move them to /tmp/task1, the second cron task should take the next 10 files and move them to /tmp/task2, etc.
But right now the first 10 files get moved to both /tmp/task1 and /tmp/task2.
First and foremost: rename is atomic. It is not possible for a file to be moved twice. One of the moves will fail, because the file is no longer there. If the scripts run in parallel, both list the same 10 files, and instead of the first 10 files moving to /tmp/task1 and the next 10 to /tmp/task2, you may get 4 moved to /tmp/task1 and 6 to /tmp/task2. Or maybe 5 and 5, or 9 and 1, or any other combination. But each file will only end up in one task.
So nothing is incorrect; each file is still processed only once. But it is inefficient, because you could process 10 files at a time yet are only processing 5. If you want to make sure you always process 10 when enough files are available, you will have to do some synchronization. There are basically two options:
Place a lock around the list-and-move step. This is most easily done using flock(1) and a lock file. There are two ways to do that, too:
Call the whole copying operation via flock:
flock targdir -c copy-script
This requires that you put the part that needs mutual exclusion into a separate script.
Lock via file descriptor. Before the copying, do
exec 3>targdir/.lock
flock 3
and after it do
flock -u 3
This lets you lock only part of the script. It does not work in Cygwin (but you probably don't need that).
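Put together with the snippet from the question, the descriptor-based variant could look roughly like this (a sketch; the descriptor number and the .lock file name are arbitrary):
# take an exclusive lock before listing, release it after the moves
exec 3>targdir/.lock
flock 3
ls -1h targdir/*.json | head -n ${LIMIT} > ${TMP_LIST_FILE}
while read REMOTE_FILE
do
  mv $REMOTE_FILE $SCRDRL
done < "${TMP_LIST_FILE}"
flock -u 3
rm -f "${TMP_LIST_FILE}"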
Move the files one by one until you have enough.
ls -1h targdir/*.json > ${TMP_LIST_FILE}
# ^^^ do NOT limit here
COUNT=0
while read REMOTE_FILE
do
  if mv $REMOTE_FILE $SCRDRL 2>/dev/null; then
    COUNT=$(($COUNT + 1))
  fi
  if [ "$COUNT" -ge "$LIMIT" ]; then
    break
  fi
done < "${TMP_LIST_FILE}"
rm -f "${TMP_LIST_FILE}"
The mv will sometimes fail, in which case you don't count the file and simply try the next one, on the assumption that the mv failed because the file was meanwhile moved by the other script. Each script moves at most $LIMIT files, but the selection may be rather random.
On a side note, if you don't absolutely need to set environment variables inside the while loop, you can do without a temporary file. Simply:
ls -1h targdir/*.json | while read REMOTE_FILE
do
...
done
You can't propagate variables out of such a loop, because as part of a pipeline it runs in a subshell.
If you do need to set environment variables and can live with using bash specifically (I usually try to stick to /bin/sh), you can also write:
while read REMOTE_FILE
do
  ...
done < <(ls -1h targdir/*.json)
In this case the loop runs in the current shell, but this kind of redirection (process substitution) is a bash extension.
The fact that two cron jobs may try to move the same file should not matter to you, unless you are disturbed by the error one of them gets (one will succeed and the other will fail).
You can ignore the error by using:
...
mv $REMOTE_FILE $SCRDRL 2>/dev/null
...
Since your script is supposed to move a specific number of files from the list, two instances will at best move twice as many files. If they interfere with each other, the number of moved files might even be less.
In any case, this is probably a bad situation to begin with. If you have any way of preventing two instances of the script from running at the same time, you should use it.
If, however, you have no way of preventing two script instances from running at the same time, you should at least harden the scripts against errors:
mv "$REMOTE_FILE" "$SCRDRL" 2>/dev/null
Otherwise your scripts will produce error output (not a good idea in a cron script).
Further, I hope your ${TMP_LIST_FILE} is not the same in both instances (you could use $$ in it to avoid that); otherwise they would even overwrite this temp file, in the worst case leaving a corrupted file containing paths you do not want to move.
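For example, a per-instance list file could look like this (the /tmp path and name are just an illustration):
TMP_LIST_FILE="/tmp/json-move-list.$$"   # $$ is the PID, unique per running instance
ls -1h targdir/*.json | head -n ${LIMIT} > "${TMP_LIST_FILE}"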

scp2 from Linux to Windows via batch script, *.* file mask issues

I'm running a batch file that uses an .ini file to populate an scp2 command (F-Secure's scp2). When triggered, the batch file uses scp2 to copy data files from a remote Linux server to the local Windows server.
INI FILE:
REMOTE_FILE="*"
BATCH FILE:
"%SSH_HOME%\scp2" -k %KEYS% -o "AllowedAuthentications publickey" -o "StrictHostKeyChecking off" %USER%#%SERVER%:%REMOTE_DIR%/%REMOTE_FILE%.* %LOCAL_DIR% >> %LOG% 2>&1
When %REMOTE_FILE% was set to "x", this would happily collect all files x.*
However, since changing %REMOTE_FILE% to "*", scp2 now also tries to copy the subdirectories on the remote server, which fails because I am not using -r, and it also causes a non-zero exit code from scp2, which affects subsequent processing in the batch file.
I am assuming that one of the operating systems (not sure which) is expanding the file mask, but I cannot work out how to stop this behaviour and let scp2 expand the mask itself. I have tried setting the variable to "*", as well as putting quotes around the whole user/password/directory/file specification, i.e.
"%USER%@%SERVER%:%REMOTE_DIR%/%REMOTE_FILE%.*"
but with no success. Any ideas out there, please?
If your intention is to copy just the files, since you are not bothering with the -r argument, then perhaps changing the mask from "*" to "*.*" might get you what you want? Directories typically have no extension, so a *.* mask would skip them.
