bash script runs from shell but not from cron job - linux

Cron installation is vixie-cron
/etc/cron.daily/rmspam.cron
#!/bin/bash
/usr/bin/rm /home/user/Maildir/.SPAM/cur/*;
I have this simple bash script that I want to add to a cron job (the full script also includes spam-learning commands before this part), but this part always fails with "File or directory not found". From what I can figure, the metacharacter isn't being interpreted correctly when the script is run as a cron job. If I execute the script from the command line, it works fine.
I'd like to know why this isn't working and, of course, a working solution :)
Thanks
edit #1
I came back to this question when I got the Popular Question badge for it. I first did this,
#!/bin/bash
find /home/user/Maildir/.SPAM/cur/ -type f | xargs rm
and just recently was reading through the xargs man page and changed it to this
#!/bin/bash
find /home/user/Maildir/.SPAM/cur/ -type f | xargs --no-run-if-empty rm
The short xargs option is -r.
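(With GNU find, by the way, the same cleanup works without xargs at all; a minimal sketch:)
#!/bin/bash
# -delete removes each matching file; if nothing matches, nothing is deleted and nothing fails
find /home/user/Maildir/.SPAM/cur/ -type f -delete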

If there are no files in the directory, then the wildcard will not be expanded and will be passed to the command literally. There is no file called "*", so the command fails with "File or directory not found". Try this instead:
if [ -f /home/user/Maildir/.SPAM/cur/* ]; then
    rm /home/user/Maildir/.SPAM/cur/*
fi
Or just use the "-f" flag to rm. The other problem with this command is what happens when there is too much spam for the maximum length of the command line. Something like this is probably better overall:
find /home/user/Maildir/.SPAM/cur -type f -exec rm '{}' +
If you have an old find that only execs rm one file at a time:
find /home/user/Maildir/.SPAM/cur -type f | xargs rm
That handles too many files as well as no files. Thanks to Charles Duffy for pointing out the + option to -exec in find.
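Another way to handle the empty-directory case, if the script runs under bash, is to turn on nullglob so an unmatched wildcard expands to nothing instead of the literal "*"; a minimal sketch:
#!/bin/bash
# nullglob makes an unmatched wildcard expand to zero words instead of itself
shopt -s nullglob
files=(/home/user/Maildir/.SPAM/cur/*)
# only call rm when the glob actually matched something
if [ "${#files[@]}" -gt 0 ]; then
    rm -- "${files[@]}"
fi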

Are you specifying the full path to the script in the cronjob?
00 3 * * * /home/me/myscript.sh
rather than
00 3 * * * myscript.sh
On another note, it's /bin/rm on all of the linux boxes I have access to. Have you double-checked that it really is /usr/bin/rm on your machine?
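For reference, a crontab with the full script path (and an explicit PATH, since cron's default environment is very minimal) might look like this; the paths are only illustrative:
PATH=/bin:/usr/bin
00 3 * * * /home/me/myscript.sh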

try adding
MAILTO=your@email.address
to the top of your cron file and you should get any output/errors mailed to you.
Also consider adding the command itself as a cron job:
30 0 * * * /usr/bin/rm /home/user/Maildir/.SPAM/cur/*
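Putting those two suggestions together, the crontab could look roughly like this (the address and the 00:30 schedule are just placeholders, and -f keeps rm quiet when the directory is empty):
MAILTO=your@email.address
30 0 * * * /usr/bin/rm -f /home/user/Maildir/.SPAM/cur/*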

Try using the force option and forget about adding a path to the rm command; it should not be needed...
rm -f
This will ensure that even if there are no files in the directory, the rm command will not fail. If this is part of a shell script, the * should work. It looks to me like you might have an empty dir...
I understand that the rest of the script is being executed, right?

Is rm really located in /usr/bin/ on your system? I have always thought that rm should reside in /bin/.

Related

How to replace backup file with timestamp in its name without producing duplicates in Linux Bash (shell script)

#!/usr/bin/env bash
# usage: wttr [location], e.g. wttr Berlin, wttr New\ York
# Standard location if no parameters were passed
location=''
language=''
time=`date`
# Expand terminal display
if [ -z "$language" ]; then
language=${LANG%_*}
fi
curl \
-H -x "Accept-Language: ${language}" \
-x wttr.in/"${1:-${location}}" |
head -n 7 |
tee /home/of/weather.txt |
tee -a /home/of/weather.log |
tee /home/of/BACKUP/weather_"$time".txt
#cp weather.txt /home/of/BACKUP
#mv -f /home/of/BACKUP/weather.txt /home/of/BACKUP/weather_"$time".txt
I'm very new to Linux Bash and Shell scripting and can't figure out the following.
I have a problem with the shell script above.
It works fine so far (curling ASCII data from website and writing it to weather.txt and .log).
It is also set in crontab to run every 5 minutes.
Now I need to make a backup of weather.txt under /home/of/, in /home/of/BACKUP with the filename weather_<timestamp>.txt.
I tried to delete (rm weather*.txt) the old timestamped files in /home/of/BACKUP and then copy and rename the file every time the cron job runs.
I tried piping cp and mv and so on, but somehow I end up producing many duplicates (the filenames differ because of the timestamp), or nothing at all when I try to delete the contents of the folder first.
All I need is ONE backup file of weather.txt, as weather_<timestamp>.txt, which gets updated every 5 minutes with the current timestamp, but I can't figure it out.
If I understand your question at all, then simply
rm -f /home/of/BACKUP/weather_*.txt
cp /home/of/weather.txt /home/of/BACKUP/weather_"$time".txt
cp lets you rename the file you are copying to; it doesn't make sense to separately cp and then mv.
For convenience, you might want to cd /home/of so you don't have to spell out the full paths, or put them in a variable.
dir=/home/of
rm -f "$dir"/BACKUP/weather_*.txt
cp "$dir"/weather.txt "$dir"/BACKUP/weather_"$time".txt
If you are running from the crontab of the user named of, then your current working directory will be /home/of (though if you need to be able to run the script manually from anywhere, that cannot be guaranteed).
Obviously, make sure the wildcard doesn't match any files you actually want to keep.
As an aside, you can simplify the tee commands slightly. If this should only update the files and not print anything to the terminal, you could even go with
curl \
-H -x "Accept-Language: ${language}" \
-x wttr.in/"${1:-${location}}" |
head -n 7 |
tee /home/of/weather.txt \
>>/home/of/weather.log
I took out the tee to the backup file since you are deleting it immediately after anyway. You could alternatively empty the backup directory first, but then you will have no backups if the curl fails.
If you want to keep printing to the terminal, too, probably run the script with redirection to /dev/null in the cron job to avoid having your email inbox fill up with unread copies of the output.
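Putting the pieces together, a minimal sketch of the whole script with those fixes applied (the paths and the wttr.in URL come from the question; the exact curl options are my assumption of what was intended):
#!/usr/bin/env bash
# usage: wttr [location]
location=''
language="${LANG%_*}"
time=$(date)
dir=/home/of

# fetch the report, keep the latest copy in weather.txt and append it to the log
curl -H "Accept-Language: ${language}" wttr.in/"${1:-${location}}" |
head -n 7 |
tee "$dir"/weather.txt >> "$dir"/weather.log

# keep exactly ONE timestamped backup: drop the old ones, then copy the fresh file
rm -f "$dir"/BACKUP/weather_*.txt
cp "$dir"/weather.txt "$dir"/BACKUP/weather_"$time".txt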

bash command to check if a directory is executable

My goal is to check if the execution bit is not set for a directory.
I changed the permission of /tmp so that the execution bit is off.
root$: chmod 666 /tmp
root$: ls -l /
....
.....
drw-rw-rw- 12 root root 4096 Feb 29 15:17 tmp
In my bash script, I have tried the following without success:
if [ ! -x /tmp ]; then
......
I have experimented with all the suggestions at the following link, but the only different syntax approach does not work for me either:
if [[ ! -x /tmp ]]; then
check if a file is executable
These work as expected for regular files, but not for directories, and I don't know why. Any ideas?
Update #2
I wrote a mini bash script with only the code suggested in a comment below.
Results:
[root@mc /]# cat tst.sh
#!/bin/bash
if [ ! -x /tmp ]; then echo 'not executable!'; fi
exit
[root@mc /]# ./tst.sh
[root@mc /]#
All of the code that you have provided in your question is correct (I just finished testing it myself). It stands to reason, therefore, that something else in the script is failing. If you could, try running this simplified snippet:
if [ ! -x /tmp ]; then echo 'not executable!'; fi
As a quick side note, the "executable" flag for directories in Unix systems does not actually mean "executable". It is actually the way that the directory is marked as searchable. While I'm not sure if this will help with the problem you are working on, it is an interesting usage of existing fields.
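Note also that the prompts in Update #2 show the test being run as root; root bypasses directory permission checks, so [ -x /tmp ] is true for root regardless of the mode bits, which would explain why nothing was printed there. Run as an ordinary user, the difference is easy to see (a rough sketch; /tmp/demo is just an example directory):
mkdir /tmp/demo && touch /tmp/demo/file
chmod 666 /tmp/demo                        # read and write, but no search (x) bit
ls /tmp/demo                               # listing names still works via the read bit
cat /tmp/demo/file                         # fails: entering the directory needs the x bit
[ -x /tmp/demo ] || echo 'not searchable'  # prints for a non-root user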
You can perhaps use the find command to single out any directory without the executable bit:
notex=$(find . -maxdepth 1 -type d -perm 666)
I think that may help..

cron job to find a file and list its output if found

I am trying to get a cron job set up to run every 10 minutes to find 2 particular files (let's say a and b) and, if found, cat their contents along with the timestamp when each file was created, and send that as an email, on SUSE Linux.
Could anyone please suggest how?
Thank you
Jonu Joy
Assuming that mail-delivery as such works, and that you know how to edit crontabs ...
Put the following into a script (modify paths to match your system, I don't have SUSE here to play with), make it executable, and run that from cron every ten minutes.
#!/bin/bash
find . -name a -o -name b | while read -r file; do ls -l "$file"; cat "$file"; echo ""; done | mail user@domain
And then:
chmod +x /path/to/script/above
Run from cron like so:
*/10 * * * * /path/to/script/above
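If you want the file timestamps spelled out explicitly (note that Linux filesystems generally expose only the modification time, not a true creation time), a variant might look like this (a sketch; the search directory, file names, subject, and address are placeholders):
#!/bin/bash
# report files named a or b under /some/dir, with their modification times, and mail the result
find /some/dir -name a -o -name b | while read -r file; do
    stat -c 'found %n (modified %y)' "$file"
    cat "$file"
    echo ""
done | mail -s "file report" user@example.com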

Groups of commands in Linux shell

Suppose I have this shell script called cpdir:
(cd $1 ; tar -cf - . ) | (cd $2 ; tar -xvf - )
When I run it, the main shell creates two processes (subshells) to execute both groups of commands concurrently. However, how can the shell make sure that both processes change to the appropriate directories, so that the first process packages the content of its directory and sends it to the second process for unpacking?
Why is there no race condition? Is it a rule that the commands of each process execute in order, even though the processes run in parallel?
I.e. does the first process run "cd $1" and then the second process run "cd $2" (or do they execute at the same time? Not sure), and then the first process packages everything and finally sends it to the second process?
Also, one little thing I don't know about tar:
tar -cf - .
I know the dot (.) is the content of current directory. However, what's the '-' in the command?
You don't need to use cd because tar has a -C option which tells it to change to a directory. So you can simply use a command such as:
tar -C $1 -cvf - . | tar -C $2 -xvf -
- means stdin/stdout. The first hyphen tells tar to write to stdout. The second one tells tar to read from stdin.
Since - is the default, you don't even need to specify it. You can shorten your command to:
tar -C $1 -c . | tar -C $2 -x
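So the cpdir script from the question could be reduced to something like this (a sketch; quotes added so paths with spaces survive):
#!/bin/bash
# cpdir: copy the contents of directory $1 into directory $2
tar -C "$1" -cf - . | tar -C "$2" -xvf -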
As those groups run in independent processes, it doesn't matter which cd command runs first: each process has its own working directory.
So changing working directory does not affect the respectively other process.
You are piping the commands in your case. The result you expect is not very clear to me.
By the way, my GNU tar has no "-" value for the "-f" option, so your commands might not be portable.

bash - how to pipe result from the which command to cd

How could I pipe the result from a which command to cd?
This is what I am trying to do:
which oracle | cd
cd < which oracle
But none of them works.
Is there a way to achieve this (rather than copy/paste of course)?
Edit : on second thought, this command would fail, because the destination file is NOT a folder/directory.
So I am thinking and working out a better way to get rid of the trailing "/oracle" part now (sed or awk, or even Perl) :)
Edit :
Okay that's what I've got in the end:
cd `which oracle | sed 's/\/oracle//g'`
You use a pipe in cases where the command expects its parameters on standard input. With the cd command that is not the case: the directory is the command's argument. In such a case, you can use command substitution. Use backticks or $(...) to evaluate the command and store the result in a variable:
path=`which oracle`
echo $path # just for debug
cd $path
although it can be done in a much simpler way:
cd `which oracle`
or if your path has special characters
cd "`which oracle`"
or
cd $(which oracle)
which is equivalent to backtick notation, but is recommended (backticks can be confused with apostrophes)
.. but it looks like you want:
cd $(dirname $(which oracle))
(which shows you that you can use nesting easily)
$(...) (as well as backticks) also works in double-quoted strings, which helps when the result may contain spaces:
cd "$(dirname "$(which oracle)")"
(Note that both command substitutions need their own set of double quotes.)
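If you find yourself doing this often, it can be wrapped in a small shell function (a sketch; the name cdbin is made up):
# cd into the directory that contains a command found on PATH
cdbin() {
    cd "$(dirname "$(which "$1")")"
}
# usage: cdbin oracle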
With dirname to get the directory:
cd $(which oracle | xargs dirname)
EDIT: beware of paths containing spaces, see @anishpatel's comment below
cd `which oracle`
Note those are backticks (generally the key to the left of 1 on a US keyboard)
OK, here a solution that uses correct quoting:
cd "$(dirname "$(which oracle)")"
Avoid backticks, they are less readable, and always quote command substitutions.
You don't need a pipe, you can do what you want using Bash parameter expansion!
Further tip: use "type -P" instead of the external "which" command if you are using Bash.
# test
touch /ls
chmod +x /ls
cmd='ls'
PATH=/:$PATH
if cmdpath="$(type -P "$cmd")" && cmdpath="${cmdpath%/*}" ; then
cd "${cmdpath:-/}" || { echo "Could not cd to: ${cmdpath:-/}"; exit 1; }
else
echo "No such program in PATH search directories: ${cmd}"
exit 1
fi
Besides the good answers above, one thing worth mentioning is that cd is a shell builtin, which runs in the current shell process, unlike an external command such as ls, which runs in a new process.
https://unix.stackexchange.com/questions/50022/why-cant-i-redirect-a-path-name-output-from-one-command-to-cd
http://en.wikipedia.org/wiki/Shell_builtin
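A quick way to see both points at once (a sketch; it assumes oracle is on PATH):
# cd takes its directory as an argument, not on standard input,
# and each side of a pipe runs in its own subshell anyway:
echo /usr/bin | cd    # leaves the current directory unchanged
pwd
# passing the result as an argument works:
cd "$(dirname "$(which oracle)")"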
In response to your edited question, you can strip off the name of the command using dirname:
cd $(dirname `which oracle`)
