Run a shell command when a file is added - linux

I have a folder named images on my linux box.
This folder is connected to a website, and the site's admin can add pictures to it. When a picture is added, I want a command to run that resizes all of the pictures in that directory.
In short, I want to know how I can make the server run a specific command when a new file is added to a specific location.

I don't know how people are uploading content to this folder, but you might want to use something lower-tech than monitoring the directory with inotify.
If the protocol is FTP and you have access to your FTP server's log, I suggest tailing that log to watch for successful uploads. This event-triggered approach will be faster, more reliable, and impose less load than polling with a traditional cron job, and it will be more portable and easier to debug than something using inotify.
The way you handle this will of course depend on your FTP server. I have one running vsftpd whose logs include lines like this:
Fri May 25 07:36:02 2012 [pid 94378] [joe] OK LOGIN: Client "10.8.7.16"
Fri May 25 07:36:12 2012 [pid 94380] [joe] OK UPLOAD: Client "10.8.7.16", "/path/to/file.zip", 8395136 bytes, 845.75Kbyte/sec
Fri May 25 07:36:12 2012 [pid 94380] [joe] OK CHMOD: Client "10.8.7.16", "/path/to/file.zip 644"
The UPLOAD line only gets added when vsftpd has successfully saved the file. You could parse this in a shell script like this:
#!/bin/sh

# Follow the vsftpd log and react to each successful upload.
tail -F /var/log/vsftpd.log | while read -r line; do
  if echo "$line" | grep -q 'OK UPLOAD:'; then
    # The second comma-separated field is the quoted path; strip the
    # leading space and surrounding double quotes so the -s test works.
    filename=$(echo "$line" | cut -d, -f2 | sed 's/^ *"//; s/"$//')
    if [ -s "$filename" ]; then
      # do something with $filename
      : # replace with your command, e.g. the resize
    fi
  fi
done
If you're using an HTTP upload tool, see if that tool has a text log file it uses to record incoming files. If it doesn't, consider adding some sort of logging function to it so that it produces a log you can tail.

As John commented, the inotify API is a starting point; however, you might be interested in some tools that use this API to perform file-notification tasks:
For example, incron, which can be used to run cron-like tasks when file or directory changes are detected.
Or inotify-tools, a set of command-line tools that you can use to build your own file-notification shell script.
If you look at the bottom of the wiki page for inotify-tools, you will see references to even more tools and to support for higher-level languages like Python, Perl, or Ruby (example code).
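For instance, a minimal sketch of the original image-resizing use case built on inotifywait (assuming inotify-tools is installed, that the images live in a hypothetical /var/www/site/images, and that ImageMagick's mogrify is available for the resize):
#!/bin/sh
# Watch the images directory and resize each newly written image.
# WATCH_DIR and the 800x800 bound are illustrative assumptions.
WATCH_DIR=/var/www/site/images

inotifywait -m -e close_write -e moved_to --format '%w%f' "$WATCH_DIR" |
while read -r newfile; do
    case "$newfile" in
        *.jpg|*.jpeg|*.png|*.gif)
            # Shrink to fit within 800x800 without enlarging ('>').
            mogrify -resize '800x800>' "$newfile"
            ;;
    esac
done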

You might want to look at inotify
The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.

#!/bin/bash

tail -F -n0 /var/log/vsftpd.log | while read -r line; do
  if echo "$line" | grep -q 'OK UPLOAD:'; then
    # Second comma-separated field, first word: the quoted file path.
    filename=$(echo "$line" | cut -d, -f2 | awk '{print $1}')
    # Strip the surrounding double quotes.
    filename="${filename%\"}"
    filename="${filename#\"}"
    #sleep 1s
    if [ -s "$filename" ]; then
      # do something with $filename
      echo "$filename"
    fi
  fi
done

Using ghoti's work:
Here is what I did to report the user's free space:
#!/bin/bash

tail -F -n 1 /var/log/vsftpd.log | while read -r line; do
  if echo "$line" | grep -q 'OK LOGIN:'; then
    # Extract the last bracketed field from the log line (the username).
    pid=$(sed 's/.*\[\([^]]*\)\].*/\1/g' <<< "$line")
    # The '<<<' operator doesn't exist in dash, so use bash.
    if [[ $pid != *"pid"* ]]; then
      echo -e "Disk 1: Contains Games:\n" > /home/vftp/"$pid"/FreeSpace.txt
      df -h /media/Disk1/ >> /home/vftp/"$pid"/FreeSpace.txt
      echo -e "\r\n\r\nIn order to read this properly you need to use a text editor that can read *nix format files" >> /home/vftp/"$pid"/FreeSpace.txt
    fi
    echo "checked"
    # awk '{ sub("\r$", ""); print }' /home/vftp/"$pid"/FreeSpace.txt > /home/vftp/"$pid"/FreeSpace.txt
  fi
done

If the file is added through an HTTP upload, and your server is Apache, you might want to look at mod_security.
It lets you run a script for every upload made through an HTTP POST.
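As a rough illustration only (assuming Apache with ModSecurity 2.x; the script path and rule id are placeholders, and the exact directives should be checked against the ModSecurity reference manual for your version), the @inspectFile operator hands each uploaded file to a script of your choosing:
# Hypothetical ModSecurity 2.x snippet: run /usr/local/bin/on-upload.sh for
# every file received in a multipart POST body.
SecRequestBodyAccess On
SecRule FILES_TMPNAMES "@inspectFile /usr/local/bin/on-upload.sh" \
    "id:100001,phase:2,t:none,log,pass"
The script is given the path of the uploaded temporary file as its argument, so it can trigger the resize (or any other command) from there.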

Related

How to delete files after using grep function

I have the command below:
grep -rnw '/root/serviceDown/' -e "The service 'httpd' on server is currently down"
and the result is as follows:
/root/serviceDown/2946/000.conf:5:subject=The service 'httpd' on server is currently down
/root/serviceDown/2955/000.conf:5:subject=The service 'httpd' on server is currently down
How can I write a script that deletes those files after the grep command and then restarts the server?
This probably is what you are looking for:
grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null | xargs rm
The -n and -w flags do not really make sense for your purpose; the additional information they produce only gets in the way here. The -e flag is not required either; it merely marks the next argument as the search pattern, which is unnecessary for the string you use. The -l flag reduces the output to the names of matching files. You filter out the error output using 2>/dev/null and finally pipe the resulting list of files into the xargs utility, which runs a simple rm command to delete them.
Restarting the server process afterwards can be done with whatever command you usually use for that; just execute it after the above command, either manually or separated by a simple ; to run both in one go.
Obviously you need sufficient system permission to be able to perform both commands...
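For example, combining the cleanup with the restart in one go (assuming service httpd restart is the restart command you normally use):
grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null | xargs rm ; service httpd restart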
For more advanced processing of the result, as you ask in your comment below, I suggest you implement a simple script. This offers much more flexibility, is easier to read and maintain, and still allows execution as a single command.
This might be a starting point for you:
#!/bin/bash

# fetch list of matching files
list=$(grep -lr "The service 'httpd' on server is currently down" /root/serviceDown/ 2>/dev/null)
if [[ -z "$list" ]]; then
    echo "No files matched, nothing to be done..."
    exit
fi

# delete files one by one
for match in $list
do
    echo "Removing matched file $match..."
    rm "$match"
done

# restart server process
echo "Restarting server process..."
service httpd restart

# that's it, basically
echo "...done."
Save that script into some folder listed in your PATH environment variable (e.g. /root/bin/restartFailedHttpdServer), make it executable (chmod u+x /root/bin/restartFailedHttpdServer), and finally execute it (restartFailedHttpdServer).

Is there a way to perform a "tail -f" from an url?

I currently use tail -f to monitor a log file: this way I get an auto-refreshing console monitoring a web server.
Now that web server has been moved to another host, and I have no shell access to it.
Nevertheless, I have a network path to a .txt file, which in the end is a log file that is constantly updated.
So I'd like to do something like tail -f, but on that URL.
Would that be possible? In the end, "in Linux everything is a file", so...
You can do the auto-refresh with the help of watch combined with wget.
It won't show history like tail -f does; rather, it updates the screen like top.
An example command that shows the content of file.txt on the screen and updates the output every five seconds:
watch -n 5 wget -qO- http://fake.link/file.txt
You can also output only the last n lines instead of the whole file:
watch -n 5 "wget -qO- http://fake.link/file.txt | tail"
If you still need behaviour like tail -f (keeping the history), I think you need to write a script that downloads the log file periodically, compares it to the previously downloaded version, and then prints the new lines. That should be quite easy.
I wrote a simple bash one-liner to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the diff to that same file.
I wanted to stream AWS Amplify logs in my Jenkins pipeline.
while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done
Don't forget to create the empty output.txt file first:
: > output.txt
View the stream:
tail -f output.txt
original comment : https://stackoverflow.com/a/62347827/2073339
UPDATE:
I found a better solution using wget here:
while true; do wget -ca -o /dev/null -O output.txt "$URL"; sleep 2; done
https://superuser.com/a/514078/603774
I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:
# Poll logs continuously over HTTP
logpoll() {
    FILE=$(mktemp)
    echo "———————— LOGPOLLING TO $FILE ————————"
    tail -f $FILE &
    tail_pid=$!
    bg %1
    stop=0
    trap "stop=1" SIGINT SIGTERM
    while [ $stop -ne 1 ]; do wget -co /dev/null -O $FILE "$1"; sleep 2; done
    echo "——————————— LOGPOLL DONE ————————————"
    kill $tail_pid
    rm $FILE
    trap - SIGINT SIGTERM
}
Explanation:
Create a temporary logfile using mktemp and save its path to $FILE
Make tail -f output the logfile continuously in the background
Make ctrl+c set stop to 1 instead of exiting the function
Loop until stop bit is set, i.e. until the user presses ctrl+c
wget given URL in a loop every two seconds:
-c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
-o /dev/null - wget's log messages shall be thrown into the void
-O $FILE - output the contents to the temp logfile we've created
Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
The proposed solutions periodically download the full file.
To avoid that, I've created a package and published it on NPM; it does a HEAD request (to get the size of the file) and then requests only the last bytes.
Check it out and let me know if you need any help.
https://www.npmjs.com/package/@imdt-os/url-tail

scp: how to find out that copying was finished

I'm using the scp command to copy a file from one Linux host to another.
I run scp on host1 to copy a file from host1 to host2. The file is quite big, and it takes some time to copy.
On host2 the file appears immediately, as soon as copying starts, and I can do anything with it even while copying is still in progress.
Is there any reliable way to find out on host2 whether the copying has finished?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
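A minimal sketch of that two-step variant with ssh-agent (assuming key-based authentication and the default key path):
# Start an agent for this shell and load your key once.
eval "$(ssh-agent)"
ssh-add ~/.ssh/id_rsa

# Both copies now reuse the cached key, so you only enter credentials once.
scp bigfile user@host:
scp tinyfile user@host: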
On the sending side (host1), use a script like this:
#!/bin/bash

echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT -eq 0 ]; then
    echo 'transfer successful'
    touch successful
    scp successful USER@DST_SERVER:DST_PATH
else
    echo 'transfer failed'
fi
On the receiving side (host2), make a script like this:
#!/bin/bash

SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
    ((CNT++))
    sleep "$SLEEP_TIME"
done

if [[ -e successful ]]; then
    echo 'successful'
    rm successful
    # do something with FILE
fi
CNT and MAX_CNT prevent an endless loop (in case the file successful never arrives).
The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time. In my example the expected transfer time is less than 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you whether they're identical.
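If you do have SSH access to host2, one quick way to compare is the following sketch (bigfile and user@host2 are placeholders):
# The checksums only match once the copy on host2 is complete and intact.
if [ "$(md5sum < bigfile)" = "$(ssh user@host2 'md5sum < bigfile')" ]; then
    echo "transfer complete"
fi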
For the situation where you don't have SSH access to the remote system - like an FTP server - you can download the file after it's uploaded and compare the checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum "$LOCALFILE" | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
    -o "$MD5TESTFILE" \
    -sS \
    -u "$FTPUSER:$FTPPASS" \
    "ftp://$FTPHOST/$REMOTEFILE"
MD5DST=$(md5sum "$MD5TESTFILE" | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]
then
    echo "+Local and Remote files match!"
else
    echo "-Local and Remote files don't match"
fi
If you use inotify-tools, then the solution will look like this:
while ! inotifywait -e close "$(dirname "${bigfile_fullname}")" 2>/dev/null | \
        grep -Eo "CLOSE $(basename "${bigfile_fullname}")$" >/dev/null
do
    true
done
echo "File ${bigfile_fullname} closed"
After some investigation and discussion of the problem on other forums, I have found one more solution. Maybe it can help somebody.
There is a command, lsof, that lists open files. While the copy is in progress the file is held open, so the command
lsof | grep filename
will return a non-empty result.
So you might want to make a while loop that waits until lsof returns nothing, and then proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=$(lsof | grep "$f" | wc -l)
while [ "$lsofresult" != 0 ]; do
    echo "still copying file $f..."
    sleep 5
    lsofresult=$(lsof | grep "$f" | wc -l)
done
echo "copying file $f is finished: $(ls "$f")"
For the duplicate question "How to check if file has been scp 100% to the remote location", which was about an expect script: to know whether a file has been transferred completely, we can add expect 100%, i.e. something like this:
expect -c "
    set timeout 1
    spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
    expect yes/no { send yes\r ; exp_continue }
    expect password: { send $SCP_PASSWORD\r }
    expect 100%
    sleep 1
    exit
"
if [ -f "/home/my.file" ]; then
    echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.
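On the remote end, the waiting itself can be a small loop, for example:
# Poll until the sender has created the "complete" marker file.
while [ ! -e complete ]; do
    sleep 5
done
# bigfile is now fully written; process it here.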

root running cron task can't read .txt file generated by www-data user

I have a simple php page that writes a file to my server.
// open new file
$filename = "$name.txt";
$fh = fopen($filename, "w");
fwrite($fh, "$name".";"."$abbreviation".";"."$uid".";");
fclose($fh);
I then have a cron job that I know runs as root, because I test for that and need it:
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root" 1>&2
exit 1
fi
The cron job is a bash script that can detect that the file exists, but it can't seem to read its contents.
#!/bin/bash

######################################################
#### Loop through the files and generate coincode ####
######################################################
for file in /home/test/customcoincode/queue/*
do
  echo $file
  chmod 777 $file
  echo "read file"

  while read -r coinfile; do
    echo $coinfile
    echo "Assign variables from file"
    #############################################
    #### Set the variables to from the file #####
    #############################################
    coinName=$(echo $coinfile | cut -f1 -d\;)
    coinNameAbreviation=$(echo $coinfile | cut -f2 -d\;)
    UId=$(echo $coinfile | cut -f3 -d\;)
  done < $file

  echo "`date +%H:%M:%S` - $coinName : Your Kryptocoin is being compiled!"
  echo $file
  echo "copy $coinName file to generated directory"
  cp -b $file /home/test/customcoincode/generatedCoins/$coinName.txt
  echo "`date +%H:%M:%S` : Delete queue file"
  # rm -f $file
done
echo $file confirms that the file exists, but echo $coinfile prints nothing.
Yet when I open ./coinfile.txt with nano in a terminal, I can clearly see there is text in it.
I run ls -l and I see that the file has the permissions
-rw-r--r-- 1 www-data www-data
I was under the impression that this would still mean the file can be read by other users?
Do I need to be able to execute the file if I am opening it and reading its contents?
Any advice would be greatly appreciated. I can expand and show more of my code if you want, but it was working before, when I called a bash script to write the file; at that time it saved the file under the root user with rwx for most, and it could be read. But that caused other issues in the PHP page, so it is not an option.
You have:
while read -r coinfile; do
...
I see no indication that you're reading from $file. The command
read -r coinfile
will simply read from standard input (the -r merely affects the treatment of backslashes). In a cron job, if I recall correctly, standard input is empty or unavailable, which would explain why $coinfile is empty.
If you actually do read from $file -- for example, if your real code looks something like:
while read -r coinfile; do
...
done <$file
then you need to show us your entire script, or at least a self-contained version of it that exhibits the problem. Actually, you need to show us your entire script whether that's the problem or not.
http://sscce.org/

How can I cat a remote file to read the parameters in Bash?

How can I cat a remote file? Currently, it works for local files only.
#!/bin/bash
regex='url=(.*)'
# for i in $(cat /var/tmp/localfileworks.txt);
for i in $(cat http://localhost/1/downloads.txt);
do
    echo $i;
    # if [[ $i =~ $regex ]]; then
    #     echo ${BASH_REMATCH[1]}
    # fi
done
cat: http://localhost/1/downloads.txt: No such file or directory
You can use curl:
curl http://localhost/1/downloads.txt
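For example, a sketch of the question's loop rewritten around curl (same URL, reading line by line so the commented-out regex part still works):
#!/bin/bash
regex='url=(.*)'
curl -s http://localhost/1/downloads.txt | while read -r i; do
    echo "$i"
    # if [[ $i =~ $regex ]]; then
    #     echo "${BASH_REMATCH[1]}"
    # fi
done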
Instead of cat, which reads a file from the file-system, use wget -O- -q, which reads a document over HTTP and writes it to standard output:
for i in $(wget -O- -q http://localhost/1/downloads.txt)
(The -O... option means "write to the specified file", where - is standard output; the -q option means "quiet", and disables lots of logging that would otherwise go to standard error.)
Why are you using a URL to copy from the local machine? Can't you just cat directly from the file?
If you are doing this from a remote machine and not localhost, then as far as I know you can't pass a URL to cat.
I would try something like this:
scp username@hostname:/filepath/downloads.txt /dev/stdout
As someone else mentioned, you could also use wget instead of scp.
