Will the rsync command end once it has completed - linux

I'm running a command to sync a folder.
I would like to know whether the rsync command will keep running even after all files are synced, since I have this script:
#!/bin/bash
function simSync {
    ssh zsclxengcc1d mkdir -p $RESULTS
    rsync -avzh --include='d3plot*' --include='binout*' $SCRATCH/ HOST1:$RESULTS
}
# Sync the Files
simSync
#Run simulation
mpirun -report-bindings $SOLVER ncpus=$NCPU i=$IFILE memory=${MEMORY}m memory2=$(($MEMORY/$NCPU))m && cleanup
Does this code start the sync process and then run the simulation immediately?
I need everything to end once the simulation command has completed.
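For what it's worth: an rsync invocation like this is a one-shot transfer, not a daemon, so it exits as soon as the copy finishes. As written, simSync therefore blocks, and mpirun only starts after the sync completes. If the intent is instead to sync in the background while the simulation runs and stop everything when the simulation ends, a minimal sketch (assuming the same environment variables as above) might look like:
#!/bin/bash
function simSync {
    ssh zsclxengcc1d mkdir -p $RESULTS
    rsync -avzh --include='d3plot*' --include='binout*' $SCRATCH/ HOST1:$RESULTS
}

# Start the sync in the background and remember its PID
simSync &
SYNC_PID=$!

# Run the simulation; cleanup only runs if mpirun succeeds
mpirun -report-bindings $SOLVER ncpus=$NCPU i=$IFILE \
    memory=${MEMORY}m memory2=$(($MEMORY/$NCPU))m && cleanup

# Stop the background sync once the simulation is done.
# (A production script might need to kill the whole process group
# so that the child rsync/ssh processes are terminated too.)
kill "$SYNC_PID" 2>/dev/null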

Related

Cannot start a script in monit

I have a script which has to run another script in the background:
#!/bin/bash
( cd /var/lib/docker/volumes/hostcontrol-pipe/_data/ && ./run-pipe.sh ) &
First it changes directory and runs the script; run-pipe.sh creates named pipes in its own directory.
And I have a monit config file to monitor this script and to restart it if it's not running:
check program check-pipe with path /bin/bash -c "echo 'ping' > /var/lib/docker/volumes/hostcontrol-pipe/_data/host-pipe" with timeout 1 seconds
    if status != 0 then restart
    start program = "/var/lib/docker/volumes/hostcontrol-pipe/_data/monit.sh"
The first line checks that the script is running by writing to its pipe; that part works.
The "start program" line doesn't work: the script doesn't run and is absent from "ps ax". But I see this in "sudo monit -vI":
'check-pipe' start: '/var/lib/docker/volumes/hostcontrol-pipe/_data/monit.sh'
'check-pipe' started
'check-pipe' program started
So why can't monit run the script? I tried different variants but couldn't get it to run. I can run it without changing directory (cd), but the cd is necessary.
It turns out monit could not run the script in the background because there was no output redirection. After adding an output file it started to work:
start program = "/bin/bash -c 'cd /var/lib/docker/volumes/hostcontrol-pipe/_data && ./run-pipe.sh > 1.log &'"
Alternatively, use /dev/null as the output stream, as in the variant below.
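For instance, the /dev/null variant (identical to the line above except for the redirection):
start program = "/bin/bash -c 'cd /var/lib/docker/volumes/hostcontrol-pipe/_data && ./run-pipe.sh > /dev/null 2>&1 &'"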

bash script: ssh create file; sleep 3m; rm file;

I'm trying to create a script that will ssh into a server, back up some files, sleep for 3 minutes, then remove the files.
While the remote side is sleeping, the same script returns to the local machine and rsyncs the file. Then, when the 3 minutes are up, the file is removed.
I'm trying this so as not to connect twice with ssh.
ssh $site "
tar -zcf $domain-$date.tar.gz $path;
{ sleep 3m && rm -f $domain-$date.tar.gz };
"
rsync -az $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date;
I tried command grouping with () to create a subshell, but I think the variables would not be expanded. Not sure.
Your ssh command sleeps for 3 minutes and removes the files, and only then does your script proceed to rsync the files that were just removed. There is no easy way to have the first ssh command sleep remotely while your local script proceeds to run rsync.
Do one of the following:
ssh into the server twice: after rsync completes, ssh into the server again and remove the files.
Tell rsync to remove the files after it has synced them, by adding the --remove-source-files option (see the sketch below).
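A minimal sketch of the second option, reusing the variables from the question ($site, $domain, $date, $path); --remove-source-files tells rsync to delete each file from the sending side once it has been transferred:
# Create the archive on the server; no remote sleep/rm needed
ssh $site "tar -zcf $domain-$date.tar.gz $path"

# Pull the archive, deleting it on the server after the transfer
rsync -az --remove-source-files $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date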

inotifywait misses events while script is running

I am running an inotifywait script that triggers a bash script to call a function to synchronize my database of files whenever files are modified, created, or deleted.
#!/bin/sh
while inotifywait -r -e modify -e create -e delete /var/my/path/Documents; do
    cd /var/scripts/
    ./sync.sh
done
This actually works quite well, except that during the 10 seconds it takes my sync script to run, the watch doesn't pick up any additional changes. There are instances where the sync has already looked at a directory, and an additional change occurs that isn't detected by inotifywait because it hasn't re-established its watches.
Is there any way for inotifywait to trigger the script and still maintain the watch?
Use the -m option so that it runs continuously, instead of exiting after each event.
inotifywait -q -m -r -e modify -e create -e delete /var/my/path/Documents | \
while read event; do
    cd /var/scripts
    ./sync.sh
done
This would actually have the opposite problem: if multiple changes occur while the sync script is running, it will be run again that many times, once per queued event. You might want to put something in sync.sh that prevents it from running again if it has run too recently, as in the sketch below.
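For example, a hypothetical guard at the top of sync.sh could skip the run if the previous sync finished only moments ago (the stamp file path and the 10-second window are illustrative):
#!/bin/sh
# Skip this run if the last sync finished less than 10 seconds ago
STAMP=/tmp/sync.stamp
now=$(date +%s)
last=$(cat "$STAMP" 2>/dev/null || echo 0)
if [ $((now - last)) -lt 10 ]; then
    exit 0
fi

# ... the actual sync work goes here ...

# Record when this sync finished
date +%s > "$STAMP"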

Linux Debian Run commands at boot in init script

I'm new to Linux (obviously) and I need to run some commands whenever my Linux server boots without typing them into the console manually.
I have this file called overpass.conf that runs on boot perfectly:
description 'Overpass API dispatcher daemon'

env DB_DIR=/var/www/osm/db/
env EXEC_DIR=/var/www/osm/

start on (local-filesystems and net-device-up)
stop on runlevel [!2345]

pre-start script
    rm $DB_DIR/osm3s* || true
    rm /dev/shm/osm3s* || true
end script

exec $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR
However, I want to also run the following:
cp -pR "/root/osm-3s_v0.7.4/rules" "/var/www/osm/db/"
nohup /var/www/osm/bin/dispatcher --areas --db-dir="/var/www/osm/db/" &
chmod 666 "/var/www/osm/db/osm3s_v0.7.4_areas"
nohup /var/www/osm/bin/rules_loop.sh "/var/www/osm/db/" &
I have tried adding them to the bottom of the file, prefixing the execution commands with exec, and even removing the quotes, then testing with start overpass, but it throws errors whenever I add any commands to the original ones.
How can I go about executing those 4 commands after the original ones? I'm a noob in distress. Thanks!
Edit
I solved it with these commands:
vi /etc/init.d/mystartup.sh    # add the commands to the script
chmod +x /etc/init.d/mystartup.sh
update-rc.d mystartup.sh defaults 99
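A minimal sketch of what /etc/init.d/mystartup.sh might contain for the four commands above; the LSB header is what update-rc.d uses to order the script on Debian:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mystartup
# Required-Start:    $local_fs $network
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Run Overpass area dispatcher at boot
### END INIT INFO

cp -pR "/root/osm-3s_v0.7.4/rules" "/var/www/osm/db/"
nohup /var/www/osm/bin/dispatcher --areas --db-dir="/var/www/osm/db/" &
chmod 666 "/var/www/osm/db/osm3s_v0.7.4_areas"
nohup /var/www/osm/bin/rules_loop.sh "/var/www/osm/db/" &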
There's also /etc/rc.local, which is executed at the end of the boot process.

Rsync cronjob that will only run if rsync isn't already running

I have checked for a solution here but cannot seem to find one. I am dealing with a very slow WAN connection, about 300 kB/s. I download to a remote box first and then download from it to my house. I am trying to run a cronjob that will rsync two directories on my remote and local servers every hour. I got everything working, but if there is a lot of data to transfer, the rsync runs overlap, creating two instances of the same file and sending duplicate data.
Instead, I want to call a script that runs my rsync command, but only if rsync isn't already running.
The problem with creating a "lock" file, as suggested in a previous solution, is that the lock file might already exist if the script responsible for removing it terminated abnormally.
This could happen, for example, if the user kills the rsync process, or due to a power outage. Instead one should use flock, which does not suffer from this problem.
As it happens flock is also easy to use, so the solution would simply look like this:
flock -n lock_file -c "rsync ..."
The command after the -c option is only executed if no other process holds a lock on lock_file. If the locking process terminates for any reason, the lock on lock_file is released. The -n option makes flock non-blocking: if another process already holds the lock, nothing happens.
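For example (the lock file path here is illustrative; flock creates the file if it does not exist):
flock -n /var/lock/rsyncjob.lock -c "rsync -az /local/dir/ otherhost:/remote/dir/"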
In the script you can create a "lock" file. If the file exists, the cronjob should skip the run; otherwise it should proceed. Once the script completes, it should delete the lock file:
if [ -e /home/myhomedir/rsyncjob.lock ]; then
    echo "Rsync job already running...exiting"
    exit
fi

touch /home/myhomedir/rsyncjob.lock

# your code in here

# delete lock file at end of your job
rm /home/myhomedir/rsyncjob.lock
To use the lock file example given above, a trap should be used to ensure that the lock file is removed when the script exits for any reason.
if [ -e /home/myhomedir/rsyncjob.lock ]; then
    echo "Rsync job already running...exiting"
    exit
fi

touch /home/myhomedir/rsyncjob.lock

# delete lock file when the script exits, for any reason
trap 'rm /home/myhomedir/rsyncjob.lock' EXIT

# your code in here
This way the lock file will be removed even if the script exits early.
A simple solution without using a lock file is to just do this:
pgrep rsync > /dev/null || rsync -avz ...
This will work as long as it is the only rsync job you run on the server; you can then run it directly from cron, but you will need to redirect the output to a log file.
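For example, an hourly crontab entry might look like this (the paths and log file are illustrative):
0 * * * * pgrep rsync > /dev/null || rsync -avz /data/ otherhost:/data/ >> /var/log/rsync-cron.log 2>&1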
If you do run multiple rsync jobs, you can get pgrep to match against the full command line with a pattern like this:
pgrep -f 'rsync.*/data' > /dev/null || rsync -avz --delete /data/ otherhost:/data/
pgrep -f 'rsync.*/www' > /dev/null || rsync -avz --delete /var/www/ otherhost:/var/www/
As a blunt alternative, kill any running rsync processes in the crontab entry before the new one starts.
