Command running on terminal but not on crontab - cron

I'm trying to run the following command from crontab, but for some reason it cuts off a portion of the command, as I can see when I check /var/logs/cron. However, it runs fine when I run it in the terminal.
Command in crontab:
*/30 * * * * user find /home/user/recordings -name '*.pcap,SDPerr' -exec sh -c 'mv "$0" "${0%.pcap,SDPerr}.pcap"' {} \;
from /var/logs/cron:
Jan 10 11:00:01 server CROND[116349]: (user) CMD ( find /home/user/recordings -name '*.pcap,SDPerr' -exec sh -c 'mv "$0" "${0)
What am I missing here, any help would be appreciated.

Your command has a % (percent sign) in it, which has a special meaning in crontab: an unescaped % ends the command, and everything after it is treated as standard input for the command. That is why the logged command stops at "${0. Put a backslash before the % to escape it.
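With the percent sign escaped, the same entry would look like this (only the % inside the parameter expansion changes; Vixie-style crons strip the backslash before handing the command to the shell, so sh still sees a plain %):
*/30 * * * * user find /home/user/recordings -name '*.pcap,SDPerr' -exec sh -c 'mv "$0" "${0\%.pcap,SDPerr}.pcap"' {} \;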

Related

Delete old directories using cronjob is not working in LINUX

I have tested this directory-deleting command in the Linux terminal and it works fine.
find /home/TEST_/ -maxdepth 0 -mtime +6 -exec rm -r {} ;
printf "deleted IPSIM old directory"
But when I set up a cronjob to clean up the directories, I get the error below.
find: missing argument to `-exec'
deleted IPSIM old directory
Crontab:
00 00 * * 3 cd /home/cronjob; sh cleanup_regress_SIM.sh
Can someone help with this and correct me where I am going wrong?
The correct way to put that script command in crontab is:
00 00 * * 3 /home/cronjob/cleanup_regress_SIM.sh
In more detail:
You do not need to use cd; just specify the full path to the script.
#!/bin/sh or #!/bin/bash is already defined at the beginning of the script, so it will run with the correct interpreter. There is no need to invoke sh explicitly.
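The "missing argument to `-exec'" error itself comes from the unescaped ; in the script: the shell treats a bare ; as a command separator, so find never receives its -exec terminator. A minimal corrected cleanup_regress_SIM.sh, keeping the paths from the question, would be:
#!/bin/bash
# Escape the terminator so the shell passes ';' through to find instead of consuming it
find /home/TEST_/ -maxdepth 0 -mtime +6 -exec rm -r {} \;
printf "deleted IPSIM old directory\n"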
I have tested a copy of this on my own system and it works fine, so I don't know what kind of system you are running that throws these errors. Here is what works for me.
test.sh (slightly different)
#!/bin/bash
echo $(find /home/gerge/Documents/Arduino/wifi* -maxdepth 0 -mtime +30 -exec ls {} \;)>>/home/gerge/test.log
echo "command done">>/home/gerge/test.log
crontab (run every minute for testing)
*/1 * * * * /home/gerge/test.sh
The content of test.log
wifiConfigPortal.ino wifiRelayLogin.ino wifiRGB.ino wifiRGBsimple.ino
command done
wifiConfigPortal.ino wifiRelayLogin.ino wifiRGB.ino wifiRGBsimple.ino
command done
wifiConfigPortal.ino wifiRelayLogin.ino wifiRGB.ino wifiRGBsimple.ino
command done
I would recommend checking whether other scripts are able to run as cronjobs, for example with the minimal test below. If you get the same error there, the issue is something bigger.
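A trivial check, with hypothetical paths, is a one-line script that only appends a timestamp:
#!/bin/bash
date >> /tmp/cron_test.log
made executable with chmod +x and run every minute from the crontab:
*/1 * * * * /home/gerge/cron_test.sh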

bash - find all .bashrc files and append to them

I need to find all .bashrc files and append "MYSQL_HISTFILE=/dev/null" to them, to remediate an issue. There are a lot of .bashrc files, so can I do something like:
find / -type f -name ".bashrc" -exec echo "export MYSQL_HISTFILE=/dev/null" >> {} \;
The >> redirection is handled by your original shell before find ever runs, so it cannot use the {} substitution from find. And find does not run its command through a shell, so it cannot perform output redirection itself.
You need to execute bash explicitly so you can use redirection in the command.
find / -type f -name '.bashrc' -exec bash -c 'echo export MYSQL_HISTFILE=/dev/null >> "{}"' \;
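Embedding {} inside the bash -c string works with GNU find, but it can misbehave on paths containing quotes or other shell metacharacters. A more robust variant of the same idea passes the filename as a positional parameter instead:
find / -type f -name '.bashrc' -exec bash -c 'echo "export MYSQL_HISTFILE=/dev/null" >> "$1"' bash {} \;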

Why isn't this cron doing anything?

So, I have a very simple cron set up to run daily. It does a find and rsync with certain parameters. When it runs on the bash command line, it runs just fine, but when in the root crontab, it doesn't want to know. Any ideas what is wrong here?
/usr/bin/find /var/www/*/logs/ -iname '*.lzma' -mtime +21 -exec rsync -a --ignore-existing --relative -e 'ssh -q -p 2230 -o "StrictHostKeyChecking no"' {} root@nas0:/space/Logs/reporting0/ \;
Syslog shows it ran:
Apr 28 09:40:01 reporting1 CRON[26347]: (root) CMD (/usr/bin/find /var/www/*/logs/ -iname '*.lzma' -mtime +21 -exec rsync -a --ignore-existing --relative -e 'ssh -q -p 2230 -o "StrictHostKeyChecking no"' {} root@nas0:/space/Logs/reporting1/ \;)
But nothing actually gets copied.
Cron always runs with a mostly empty environment. HOME, LOGNAME, and SHELL are set, plus a very limited PATH.
So you should either call every program with its full path or set the environment variables yourself.
For example, on Ubuntu you can replace rsync with /usr/bin/rsync and ssh with /usr/bin/ssh, as in the sketch below.
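Applied to the command from the question, that would look something like this (assuming the usual Ubuntu install locations for rsync and ssh):
/usr/bin/find /var/www/*/logs/ -iname '*.lzma' -mtime +21 -exec /usr/bin/rsync -a --ignore-existing --relative -e '/usr/bin/ssh -q -p 2230 -o "StrictHostKeyChecking no"' {} root@nas0:/space/Logs/reporting0/ \;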
You can check your cron job's environment variables by adding this entry and then inspecting /tmp/env.output:
* * * * * env > /tmp/env.output
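Alternatively, you can set PATH explicitly at the top of the crontab so unqualified command names resolve; a sketch assuming standard Ubuntu locations:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin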

Bash script not deleting files in given directory

I found this bash script online that I want to use to delete files older than 2 days:
#!/bin/bash
find /path/to/dir -type f -mtime +2 -exec rm {} \;
I set up a cronjob to run the script (I set it a couple of minutes ahead for testing, but it should run once every 24 hours):
54 18 * * * /path/to/another/dir/script.sh
I exited the editor correctly, so the crontab was updated.
Why does it not delete the files in the directory?
What if you try dumping an echo at the end of the script and logging the output:
cron1.sh >> /var/log/cron1.log
You could also try this, but I'm not sure it will work:
-exec rm -rf {}
Cron jobs do not run with your usual PATH. You must fully qualify the find command.
#!/bin/bash
/usr/bin/find /path/to/dir -type f -mtime +2 -exec rm {} \;
If you capture stdout and stderr as recommended by damienfrancois, you'd probably see the message "find: command not found". If you don't capture them, cron usually mails the output to the cron job's owner, unless it is configured not to.
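For example, you can capture both streams by redirecting them in the crontab entry itself (the log path here is arbitrary):
54 18 * * * /path/to/another/dir/script.sh >> /tmp/script.log 2>&1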

command runs in terminal but not via /bin/sh

If I run this command it works fine in the terminal:
for dirname in $(ls -d dir/checkpoint/features.txt/20*);do;echo "hello";done
But when run through /bin/sh -c it gives an error
/bin/sh -c "for dirname in $(ls -d dir/checkpoint/features.txt/20*);do;echo "hello";done"
ERROR:
/bin/sh: -c: line 1: syntax error near unexpected token `dir/checkpoint/features.txt/201108000'
/bin/sh: -c: line 1: `dir/checkpoint/features.txt/201108000'
My default shell is /bin/bash. I can't seem to understand what is causing this. My default way of running shell commands in my program is to wrap them in /bin/sh -c. This is the first time I am seeing this issue. Any suggestions?
Don't try to parse the output of ls, especially with a for construct. There are many, many ways that this can go wrong.
This is a good place to use find instead. Try this:
/bin/sh -c "find dir/checkpoint/features.txt -mindepth 1 -maxdepth 1 -type d -iname '20*' -exec echo \"hello\" \;"
Besides eliminating the error-prone use of ls, you avoid the sub-shell and all of the issues that it brings with it.
Follow-up in response to your comment:
I'm assuming that you're using awk -F/ '{print $NF}' to grab the name of the folder in which the file lives (that is, the last directory name before the filename). The basename and dirname commands can do this for you, which should make your script a bit simpler. Place the following into a script file:
#!/bin/sh
folder=$(basename "$(dirname "$1")")
mkdir -p #{nfs_checkpoint}/${folder}
cat #{result_location}/${folder}/20* > #{nfs_checkpoint}/${folder}/features.txt
And execute it like this:
/bin/sh -c "find dir/checkpoint/features.txt -mindepth 1 -maxdepth 1 -type d -iname '20*' -exec yourscript.sh {} \;"
