inotifywait does not work after running for a period of time

I have a daemon program running that monitors a specific directory for file changes. At the beginning the program runs normally, but after a period of time inotifywait stops reacting to file changes. When I restart the program, it goes back to normal. This is my shell script:
#!/bin/sh
. /etc/puppet/modules/config.sh
puppetmaster=`grep -w server ${puppet_config} | awk -F'=' '{print $2}'`
/usr/local/bin/inotifywait -mrq -e modify ${log_dir}| while read D E F
do
/usr/bin/rsync -i -p -H -S -z -r -A -o -g -a --port=${port} \
--timeout=600 --exclude='.svn/' --exclude='.git/' ${log_dir}/ \
rsync://${puppetmaster}/log_dir > /dev/null
done
Can someone please help me? Thanks.
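One thing worth checking when inotifywait silently stops reporting events after a while (a hedged guess, since nothing in the script above pins down the cause) is whether the kernel-side inotify limits are being exhausted; a log tree that keeps growing can hit the watch or queued-event ceilings:
# current limits
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_queued_events
# raise them (example values only, adjust as needed)
sysctl -w fs.inotify.max_user_watches=524288
sysctl -w fs.inotify.max_queued_events=65536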

Related

Bash increase pid kernel to unlimited for huge loop

I've been trying to run cURL in a huge loop, launching each cURL as a background process with bash. There are about 904 domains to be cURLed,
and the problem is that not all 904 can be launched because of the PID limit in the Linux kernel. I tried raising pid_max to 4194303 (I read about it in this discussion: Maximum PID in Linux), but when I checked, only 901 domains had run as background processes; before I raised pid_max it was only around 704 running in the background.
Here is my loop code:
count=0
while IFS= read -r line || [[ -n "$line" ]];
do
(curl -s -L -w "\\n\\nNo:$count\\nHEADER CODE:%{http_code}\\nWebsite : $line\\nExecuted at :$(date)\\n==================================================\\n\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line >> output.log) &
(( count++ ))
done < $FILE_NAME
Does anyone have another solution, or a fix for handling a huge loop of background cURL processes?
A script example.sh can be created:
#!/bin/bash
line=$1
curl -s -L -w "\\n\\nNo:$count\\nHEADER CODE:%{http_code}\\nWebsite : $line\\nExecuted at :$(date)\\n==================================================\\n\\n" -H "X-Gitlab-Event: Push Hook" -H 'X-Gitlab-Token: '$SECRET_KEY --insecure $line >> output.log
Then the command could be (to limit the number of processes running at a time to 50):
xargs -n1 -P50 --process-slot-var=count ./example.sh < "$FILE_NAME"
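To see what --process-slot-var actually provides (a quick illustration, assuming GNU xargs): the variable is exported to each child and holds that child's slot number, so with -P50 it cycles through 0-49 as slots are reused rather than being a running line count:
seq 1 10 | xargs -n1 -P4 --process-slot-var=count sh -c 'echo "slot=$count arg=$1"' _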
Even if you could run that many processes in parallel, it's pointless - starting that many DNS queries to resolve 900+ domain names in a short span of time will probably overwhelm your DNS server, and having that many concurrent outgoing HTTP requests at the same time will clog your network. A much better approach is to trickle the processes so that you run a limited number (say, 100) at any given time, but start a new one every time one of the previously started ones finishes. This is easy enough with xargs -P.
xargs -I {} -P 100 \
curl -s -L \
-w "\\n\\nHEADER CODE:%{http_code}\\nWebsite : {}\\nExecuted at :$(date)\\n==================================================\\n\\n" \
-H "X-Gitlab-Event: Push Hook" \
-H "X-Gitlab-Token: $SECRET_KEY" \
--insecure {} <"$FILE_NAME" >output.log
The $(date) result will be interpolated at the time the shell evaluates the xargs command line, and there is no simple way to get the count with this mechanism. Refactoring this to put the curl command and some scaffolding into a separate script could solve these issues, and should be trivial enough if it's really important to you. (Rough sketch:
xargs -P 100 bash -c 'count=0; for url; do
curl --options --headers "X-Notice: use double quotes throughout" \
"$url"
((count++))
done' _ <"$FILE_NAME" >output.log
... though this will restart numbering if xargs receives more URLs than will fit on a single command line.)
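If the per-request count really matters, a rough workaround sketch (my own illustration, assuming SECRET_KEY is exported so the child shells can see it) is to number the input lines with nl and hand each child a count/URL pair:
nl -ba -w1 -s' ' "$FILE_NAME" |
  xargs -n2 -P100 bash -c '
    count=$1 url=$2
    # count comes from the numbered input, so it stays stable no matter
    # how xargs batches the arguments; date is evaluated per request here
    curl -s -L \
      -w "\n\nNo:$count\nHEADER CODE:%{http_code}\nWebsite : $url\nExecuted at :$(date)\n\n" \
      -H "X-Gitlab-Event: Push Hook" \
      -H "X-Gitlab-Token: $SECRET_KEY" \
      --insecure "$url"
  ' _ >> output.log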

command loop in sh script

I'm creating a sh script on my raspberry for a timelapse.
I've included 4 commands in the script that run one after the other; each command is tested and working. Now my question is: how do I come back to the first command after the last one, indefinitely?
#!/bin/bash
sudo raspistill -w 1024 -h 768 -o /home/pi/timelapse/a%04d.jpg -t 600000 -tl 30000
sudo kill $(ps ax | grep 'timelapse' | awk '{print $1}')
sudo avconv -r 10 -i /home/pi/timelapse/a%04d.jpg -r 10 -vcodec libx264 -crf 20 -g 15 timelaps$
sudo rm /home/pi/timelapse/*.jpg
So after sudo rm /home/pi/timelapse/*.jpg I want to go back to the first command.
Would you have any idea?
Thanks.
You can use a loop:
#!/bin/sh
while true; do
...
done
or, re-invoke the script:
#!/bin/sh
...
exec "$0" "$@"
Frankly, either one of these seems risky in your case since you're doing no error checking at all, and you run the risk of entering a relatively fast loop of commands continuously failing. At the very least, you should pause for a bit by using while sleep 1; instead of while true;
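Put concretely, a minimal sketch of the loop form applied to the script above (the || break is an illustrative bit of error handling in the spirit of that warning; the kill and avconv steps are left as they were in the original):
#!/bin/sh
# repeat capture -> encode -> cleanup indefinitely, pausing a second per
# round and bailing out if the capture step fails instead of spinning
while sleep 1; do
    sudo raspistill -w 1024 -h 768 -o /home/pi/timelapse/a%04d.jpg -t 600000 -tl 30000 || break
    # ... the kill / avconv steps from the original script go here ...
    sudo rm /home/pi/timelapse/*.jpg
done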

Running bash scripts parallel in Linux

I am trying to run a script (1.sh)
spin -a /home/files/1/1.pml;
gcc -O2 -DXUSAFE -DSAFETY -DNOCLAIM -w -o pan pan.c >log1.txt;
./pan -m100000 >log2.txt;
spin -p -s -r -X -v -n123 -l -g -k /home/files/1/1.pml.trail \
-u10000 /home/files/1/1.pml >log3.txt;
The command spin -a ... generates temporary files (pan.c, pan.h) which are used by the next gcc -O2 ... command. If I run the script in a terminal, it creates the temporary files in the same location.
I want to run multiple scripts in parallel. I tried two things: first, writing a script that runs them in a loop in the background (parallel.sh)
for((i=1;i<1800;i++))
do
/home/files/$i/$i.sh &
done
and second, using GNU parallel: parallel -j0 sh /home/files/{}/{}.sh ::: {1..1800}.
Both methods created the temp files in the location they were called from instead of the script's location.
For example, if I run parallel.sh from /home/files, the temp files are created in /home/files instead of /home/files/1, /home/files/2, etc.
Please suggest a method so that the temporary files generated by 1.sh, 2.sh, ... are created in the directories /home/files/1/, /home/files/2/, ... respectively, while I run parallel.sh or GNU parallel from /home.
The trick is to change the working directory for each command.
If your computer can really run up to 1800 such processes at the same time without heating up the climate:
for i in {1..1800}; do (cd $i && ./$i.sh) & done
When running in parallel and your processes are CPU-bound, you usually gain no throughput by running more processes than you have processors:
seq 1 1800 | xargs -n1 -P8 -I% sh -c 'cd % && ./%.sh'
Try:
parallel 'cd /home/files/{}; sh {}.sh' ::: {1..1800}
It will run one process per core, and may be faster than '-j0' (only testing can tell with certainty).
If your scripts only vary by the number, consider rewriting them as a general script or bash function that takes the number as an argument:
spinit() {
num=$1
spin -a /home/files/$num/$num.pml;
gcc -O2 -DXUSAFE -DSAFETY -DNOCLAIM -w -o pan pan.c >log1.txt;
./pan -m100000 >log2.txt;
spin -p -s -r -X -v -n123 -l -g -k /home/files/$num/$num.pml.trail \
-u10000 /home/files/$num/$num.pml >log3.txt;
}
export -f spinit
parallel 'cd /home/files/{}; spinit {}' ::: {1..1800}

ssh tunneled command output to file

I have an old Syno NAS and wish to use the "shred" command to wipe the disks inside. The idea is to let the command run to completion on the box itself without needing a computer attached.
So far I have managed...
1) to get the right parameters for 'shred'
* runs in the background using the &
2) get that command to output the progress (-v option) to a file shred.txt
* to see from the file what the progress is
shred -v -f -z -n 2 /dev/hdd 2>&1 | tee /volume1/backup/shred.txt &
3) ssh tunnel the command so I can turn off my laptop while it's running
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd > /dev/null 2>&1 &'"
The problem is that I can't combine 2) and 3)
I tried to combine them like this, but the resulting file remained empty:
ssh -n -f root@host "sh -c 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd 2>&1 | tee /volume1/backup/shred.txt > /dev/null &'"
It might be a case of the NOOBS but I can't figure out how to get this done.
Any suggestions?
Thanks. Vince
The sh and tee commands are not needed here:
ssh -n root@host 'nohup /opt/bin/shred -f -z -n 2 /dev/sdd >/volume1/backup/shred.txt 2>&1 &' >/dev/null
The final >/dev/null is optional, it will just disregard any greetings from other hosts.
I tried the following command (based on Grzegorz's suggestion) and included the opening date stamp and the previously mentioned (stupidly forgotten) verbose switch. The last version of the command string:
ssh -n root@host 'date > /volume1/backup/shred_sda.txt; nohup /opt/bin/shred -v -f -z -n 4 /dev/sda >> /volume1/backup/shred_sda.txt 2>&1 # >/dev/null'
The last thing to figure out is how to include the date stamp when the shred command has completed.
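One hedged way to get that closing date stamp (my own sketch, not something from this exchange) is to run date and shred inside the same nohup'd shell, so the second date only fires once shred has finished:
ssh -n root@host 'nohup sh -c "date; /opt/bin/shred -v -f -z -n 4 /dev/sda; date" >> /volume1/backup/shred_sda.txt 2>&1 &'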

Processing data with inotify-tools as a daemon

I have a bash script that processes some data, using inotify-tools to know when certain events take place on the filesystem. It works fine when run from the bash console, but when I try to run it as a daemon it fails. I think the reason is that all the output from the inotifywait command goes to a file, so the part after | while never gets called. How can I fix that? Here is my script.
#!/bin/bash
inotifywait -d -r \
-o /dev/null \
-e close_write \
--exclude "^[\.+]|cgi-bin|recycle_bin" \
--format "%w:%&e:%f" \
$1|
while IFS=':' read directory event file
do
#doing my thing
done
So, -d tells inotifywait to run as a daemon, -r to watch recursively, and -o is the file to save the output to. In my case the file is /dev/null because I don't really need the output, except for the processing that happens after the command (| while ...).
You don't want to run inotifywait as a daemon in this case, because you want to keep processing the output from the command. Instead, replace the -d command-line option with -m, which tells inotifywait to keep monitoring the files and continue printing to stdout:
-m, --monitor
Instead of exiting after receiving a single event, execute
indefinitely. The default behaviour is to exit after the
first event occurs.
If you want things running in the background, you'll need to background the entire script.
Here's a solution using nohup (note: in my testing, if I specified -o, the while loop didn't seem to be evaluated):
nohup inotifywait -m -r \
-e close_write \
--exclude "^[\.+]|cgi-bin|recycle_bin" \
--format "%w:%&e:%f" \
$1 |
while IFS=':' read directory event file
do
#doing my thing
done >> /some/path/to/log 2>&1 &
