Cron task running as root can't read .txt file generated by www-data user - Linux

I have a simple PHP page that writes a file to my server.
// open a new file and write the semicolon-separated fields
$filename = "$name.txt";
$fh = fopen($filename, "w");
fwrite($fh, "$name;$abbreviation;$uid;");
fclose($fh);
I then have a cron job that I know runs as root, because I test for that (and I need it to):
if [[ $EUID -ne 0 ]]; then
  echo "This script must be run as root" 1>&2
  exit 1
fi
The cron job is a bash script that can detect that the file exists, but it can't seem to read the contents of the file.
#!/bin/bash
######################################################
#### Loop through the files and generate coincode ####
######################################################
for file in /home/test/customcoincode/queue/*
do
  echo $file
  chmod 777 $file
  echo "read file"
  while read -r coinfile; do
    echo $coinfile
    echo "Assign variables from file"
    #############################################
    #### Set the variables from the file     ####
    #############################################
    coinName=$(echo $coinfile | cut -f1 -d\;)
    coinNameAbreviation=$(echo $coinfile | cut -f2 -d\;)
    UId=$(echo $coinfile | cut -f3 -d\;)
  done < $file
  echo "`date +%H:%M:%S` - $coinName : Your Kryptocoin is being compiled!"
  echo $file
  echo "copy $coinName file to generated directory"
  cp -b $file /home/test/customcoincode/generatedCoins/$coinName.txt
  echo "`date +%H:%M:%S` : Delete queue file"
  # rm -f $file
done
echo $file confirms the file exists, but echo $coinfile prints nothing.
Yet when I open the file with nano in a terminal, I can clearly see there is text in it.
When I run ls -l, I see that the file has these permissions:
-rw-r--r-- 1 www-data www-data
I was under the impression that this would still mean the file can be read by other users.
Do I need execute permission on the file if I am only opening it and reading the contents?
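For what it's worth, read permission on the file is all that's needed to open and read it; execute permission matters for the directories along the path, not for the file itself. A quick way to check what another account can read (user and path here are only illustrative):
sudo -u nobody test -r /home/test/customcoincode/queue/example.txt && echo readable || echo "not readable"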
Any advice would be greatly appreciated. I can expand and show more of my code if you want. This was working before, when I called a bash script to write the file: back then the file was saved under the root user with rwx for most users, and it could be read. But that caused other issues in the PHP page, so it is not an option.

You have:
while read -r coinfile; do
...
I see no indication that you're reading from $file. The command
read -r coinfile
will simply read from standard input (the -r merely affects the treatment of backslashes). In a cron job, if I recall correctly, standard input is empty or unavailable, which would explain why $coinfile is empty.
If you actually do read from $file -- for example, if your real code looks something like:
while read -r coinfile; do
...
done <$file
then you need to show us your entire script, or at least a self-contained version of it that exhibits the problem. Actually, you need to show us your entire script whether that's the problem or not.
http://sscce.org/
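As an aside: once the redirection is in place, read can also split the semicolon-separated fields itself, instead of running cut three times per line. A sketch based on the file layout shown in the question:
while IFS=';' read -r coinName coinNameAbreviation UId _; do
  echo "$coinName / $coinNameAbreviation / $UId"
done < "$file"
The trailing _ soaks up the empty field produced by the final semicolon on each line.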

Related

Why is a part of the code inside a (False) if statement executed?

I wrote a small script which:
- prints the content of a file (generated by another application) on paper with a matrix printer
- prints the same line into a backup file
- removes the original file.
The script is run every minute by a cron job and works fine as long as there are files to print. If there are no files to print, it prints an empty line on the matrix printer and in the backup file. I don't understand why this happens, as I implemented an if statement that checks whether there is a file to print before the print command is executed. This behaviour only happens if the script is executed by cron, and not if I execute it manually with ./script.sh. What's the reason for this, and how can I solve it?
Something I noticed on the side: if I place an echo "hi" command in the script, it's printed to the matrix printer and the backup file. I expected it to be printed to the console when it has no >> redirection behind it. How does this work?
The script:
#!/bin/bash
# Make sure the backup directory exists
if [ ! -d /home/user/backup_logprint ]
then
  mkdir /home/user/backup_logprint
fi
# Print the records if there are any
date=`date +%Y-%m-%d`
filename='_logprint_backup'
printer_path="/dev/usb/lp0"
if [ `ls /tmp/ | grep logprint | wc -l` -gt 0 ]
then
  for f in `ls /tmp | grep logprint`
  do
    echo `cat /tmp/$f` >> "/home/user/backup_logprint/$date$filename"
    echo `cat /tmp/$f` >> $printer_path
    rm "/tmp/$f"
  done
fi
There's no need for ls or an if statement. Just use a proper glob in the for loop; if no files match, the loop body won't be entered.
#!/bin/bash
# Don't check first; just let mkdir decide if
# anything actually needs to be created.
d=/home/user/backup_logprint
mkdir -p "$d"
filename=$(date +"$d/%Y-%m-%d_logprint_backup")
printer_path="/dev/usb/lp0"
# Cause non-matching globs to expand to an empty
# sequence instead of being treated literally.
shopt -s nullglob
for f in /tmp/*logprint*; do
  cat "$f" > "$printer_path" && mv "$f" "$d"
done
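To see what nullglob actually changes, here is a quick illustration in an empty scratch directory (runnable as-is):
cd "$(mktemp -d)"   # an empty scratch directory
for f in *logprint*; do echo "got: $f"; done
# prints "got: *logprint*" because the unmatched pattern is kept literally
shopt -s nullglob
for f in *logprint*; do echo "got: $f"; done
# prints nothing: the glob expands to an empty list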

Bash script to iterate contents of directory moving only the files not currently open by other process

I have people uploading files to a directory on my Ubuntu Server.
I need to move those files to the final location (another directory) only when I know these files are fully uploaded.
Here's my script so far:
#!/bin/bash
cd /var/uploaded_by_users
for filename in *; do
  lsof $filename
  if [ -z $? ]; then
    # file has been closed, move it
  else
    echo "*** File is open. Skipping..."
  fi
done
cd -
However it's not working: it says some files are open when that's not true. I supposed $? would be 0 if the file was closed and 1 if it wasn't, but I think that's wrong.
I'm not a Linux expert, so I'm looking to learn how to implement this simple script, which will run from a cron job every minute.
[ -z $? ] checks whether $? is of zero length or not. Since $? will never be a null string, your check will always fail and result in the else part being executed.
You need to test for numeric zero, as below:
lsof "$filename" >/dev/null; lsof_status=$?
if [ "$lsof_status" -eq 0 ]; then
# file is open, skipping
else
# move it
fi
Or more simply (as Benjamin pointed out):
if lsof "$filename" >/dev/null; then
  # file is open, skip
else
  # move it
fi
Using negation, we can shorten the if statement (as dimo414 pointed out):
if ! lsof "$filename" >/dev/null; then
  # move it
fi
You can shorten it even further, using &&:
for filename in *; do
  lsof "$filename" >/dev/null && continue # skip if the file is open
  # move the file
done
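Putting it all together, the whole script might look like this: a sketch, with /var/processed standing in for the real destination directory.
#!/bin/bash
cd /var/uploaded_by_users || exit 1
shopt -s nullglob                           # empty directory: the loop simply doesn't run
for filename in *; do
  lsof "$filename" >/dev/null && continue   # still open: skip it for now
  mv "$filename" /var/processed/            # stand-in for the final location
done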
You may not need to worry about when the write is complete, if you are moving the file to a different location in the same file system. As long as the client is using the same file descriptor to write to the file, you can simply create a new hard link for the upload file, then remove the original link. The client's file descriptor won't be affected by one of the links being removed.
cd /var/uploaded_by_users
for f in *; do
  ln "$f" /somewhere/else/"$f"
  rm "$f"
done

Bash Script if a file exists and larger than loop

*Note: I edited this, so my final functioning code is below.
OK, so I'm writing a bash script to back up our MySQL database to a directory, delete the oldest backup if 10 exist, and output the results of the backup to a log so I can create alerts if it fails. Everything works great except the if statement that outputs the results. Thanks again for the help, guys; the code is below!
#!/bin/bash
# This creates a variable with the date stamp to add to the filename
now=$(date +"%m_%d_%y")
# This moves the bash shell to the directory of the backups
cd /dbbkp/backups/
# Counts the number of files in the directory with the *.sql extension and deletes the oldest once 10 is reached.
[[ $(ls -ltr *.sql | wc -l) -gt 10 ]] && rm $(ls -ltr *.sql | awk 'NR==1{print $NF}')
# Moves the bash shell to the mysql bin directory to run the backup script
cd /opt/GroupLink/everything_HelpDesk/mysql/bin/
# Command to run and dump the mysql db to the directory
./mysqldump -u root -p dbname > /dbbkp/backups/ehdbkp_$now.sql --protocol=socket --socket=/tmp/GLmysql.sock --password=password
# Echo the results to the log file
# Change back to the directory you created the backup in
cd /dbbkp/backups/
# If statement to check that the backup exists and is the proper size
if find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null | grep -q .; then
  echo "The backup has run successfully" >> /var/log/backups
else
  echo "The backup was unsuccessful" >> /var/log/backups
fi
Alternatively, you could use stat instead of find.
if [ $(stat -c %s ehdbkp_$now.sql 2>/dev/null || echo 0) -gt 51200 ]; then
  echo "The backup has run successfully"
else
  echo "The backup was unsuccessful"
fi >> /var/log/backups
Option -c %s tells stat to return the size of the file in bytes. This takes care of both the presence of the file and the size being greater than 51200. When the file is missing, stat errs out, so we redirect the error message to /dev/null. The logical-or fallback || gets executed only when the file is missing, which makes the comparison [ 0 -gt 51200 ] false.
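For example, when no such file exists (the name here is made up), the fallback kicks in:
stat -c %s no_such_backup.sql 2>/dev/null || echo 0
# prints 0, so the test becomes [ 0 -gt 51200 ], which is false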
To check if the file exists and is larger than 51200 bytes, you could rewrite your if like this:
if find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null | grep -q .; then
  echo "The backup has run successfully"
else
  echo "The backup was unsuccessful"
fi >> /var/log/backups
Other notes:
- The find takes care of two things at once: it checks that the file exists and that its size is greater than 51200 bytes.
- We redirect stderr to /dev/null to hide the error message if the file doesn't exist.
- If there is a file matching both conditions, grep will match and exit with success; otherwise it will exit with failure.
- The final outcome of the grep is what decides the if condition.
- I moved the >> /var/log/backups after the closing fi, as it's equivalent this way and there's less duplication.
Btw if is NOT a loop, it's a conditional.
UPDATE
As @glennjackman pointed out, a better way to write the if, without grep:
if [[ $(find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null) ]]; then
...

For every file modification copy it into another file bash

I want to run a service that listens for modifications to a file: every time something is added to the file, it should be removed from that file and appended to another file.
I tried this code, but it is not working; it seems to go into an infinite loop.
inotifywait -m -e modify "$1" |
while read folder eventlist eventfile
do
  cat "$1" >> $DESTINATION_FILE
  > $1
done
Each time you truncate the file, that registers as a modification, which triggers another truncation, etc. Try testing if the file contains anything in the body of the loop.
inotifywait -m -e modify "$1" |
while read folder eventlist eventfile
do
  # Only copy-and-clear if the file is not empty
  if [ -s "$1" ]; then
    cat "$1" >> "$DESTINATION_FILE"
    # What if the file is modified here?
    > "$1"
  fi
done
See my comment between the cat and the truncation. Modifications made at that point would never reach $DESTINATION_FILE, because you would erase them before the next iteration of the loop. This isn't really avoidable unless your operating system allows you to obtain a lock on $1 prior to the cat and release it after the truncation, so that only one process can write to the file at a time.
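On Linux, flock(1) provides exactly that kind of lock. A sketch, which only helps if every writer also takes the same lock before appending:
(
  flock -x 9                        # block until we hold an exclusive lock
  cat "$1" >> "$DESTINATION_FILE"   # drain the accumulated content
  > "$1"                            # truncate while the lock is still held
) 9<"$1"
Keep in mind that flock locks are advisory: a writer that does not take the lock can still slip a write in between the cat and the truncation.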
As pointed out by chepner, the reverting of the changes will also be treated as a file modification.
A way out is to:
- remove the -m parameter
- implement the while loop manually in bash
For example:
cp "$1" "$1.bak"
while true; do inotifywait -e modify "$1" | {
read folder eventlist eventfile;
cat "$1" >> "$DESTINATION_FILE";
# OR
# diff "$1" "$1.bak" >> "$DESTINATION_FILE";
cp "$1.bak" "$1";
}
done
Note: I haven't tested the above code myself.
Note 2: There may be atomicity issues. There are windows during which file modifications are not being monitored; anything written to "$1" while the cat or cp operations are in progress will be missed.

Storing directory as a variable for later use in linux script

In my script, I am holding the location (path) of a file as a variable.
For example, fileA
An example of its contents is:
fileA=/usr/anotherfolder/somefold/"filenamehere"
However, when I call a command on the file in the script, such as:
cat $fileA
or
cat "$fileA"
I get an error saying the file or directory doesn't exist. If I echo $fileA to see what the output is and then run the cat manually from the terminal, it works fine. I don't know what is going wrong. Any help?
Some debug info (from a set -x trace):
fileA='/home/jacob/Desktop/CS35L/WORK/2/hw/test3/"new"'
echo '/home/jacob/Desktop/CS35L/WORK/2/hw/test3/"new"'
/home/jacob/Desktop/CS35L/WORK/2/hw/test3/"new"
'[' '!' -r '/home/jacob/Desktop/CS35L/WORK/2/hw/test3/"new"' ']'
For these particular lines:
# Check for readable file
echo $fileA
if [ ! -r "$fileA" ]
then
  o=`expr $o + 1`
  echo "$fileA not readable."
  continue
fi
If the file name is new (not "new"), then change
fileA='/home/jacob/Desktop/CS35L/WORK/2/hw/test3/"new"'
to
fileA=/home/jacob/Desktop/CS35L/WORK/2/hw/test3/new
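The underlying point: quote characters inside a variable's value are literal data, not shell syntax. A quick illustration (paths are just examples):
touch /tmp/new
fileA='/tmp/"new"'
cat "$fileA"    # fails with "No such file or directory": the quotes are part of the name
fileA=/tmp/new
cat "$fileA"    # works: the quotes are no longer part of the value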
