Passing arguments to at -f script - linux

I'm trying to schedule a bash script using at command on Linux.
at 22:20 -f /path/to/script.sh
Issuing the command above works just fine. However, the script requires some parameters. Adding parameters after the script path returns an error message:
at 22:20 -f /path/to/script.sh /arg/one argtwo argthree
syntax error. Last token seen: /
Yes, the first parameter passed to the script is another (absolute) path. My guess is that at doesn't treat my script as a script but rather as a file, as at -help implies.
How can I work around that and add params to the script?

Specify the command to run on stdin, e.g. via a here string:
at 22:20 <<< "/path/to/script.sh /arg/one argtwo argthree"
Your command does not try to run /path/to/script.sh at a certain time. Instead, it reads and copies all the commands from /path/to/script.sh and runs those later. Since you're not invoking the script itself, it doesn't make sense to talk about arguments.
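If your shell has no here strings, piping an echo into at does the same thing:
echo "/path/to/script.sh /arg/one argtwo argthree" | at 22:20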

You're correct that it's taking just a filename, not a command. You should just run at 22:20, and then as input to that command, give the command you want run at 22:20 (i.e. /path/to/script.sh /arg/one argtwo argthree), and then Control-D on a separate line to mark the end of input. It should look a little like this ($ is my prompt):
$ at 22:20
/path/to/script.sh /arg/one argtwo argthree
job 11 at Thu May 27 22:20:00 2021
$
(Note that the "job 11 ..." line is a confirmation message printed by at.)

You can give any command as input without the -f:
at 22:20
/path/to/script.sh /arg/one argtwo argthree
^d

Related

Run a cron job now

I have a shell script called import.sh. This script will be used only once and will run for at least 2 days.
I am able to schedule a cron job like the one below.
02 10 25 7 * while IFS=',' read a;do /home/$USER/import.sh $a;done < /home/$USER/input/xaa
import.sh is the shell script
xaa is the file that contains arguments.
Now I want to run this script immediately, rather than on a schedule.
I have tried ./import.sh xaa and sh -x import.sh xaa, but if I run them in a terminal I have to leave the terminal open for as long as the script runs, which might be more than 2 days.
How can I schedule the job to run now and have it terminate as soon as it finishes?
On the Linux command line, prefixing any command with nohup prevents the command from being aborted if you log out or exit the shell; adding & at the end runs it in the background, so you get your prompt back immediately.
So you can do something like this:
nohup ./import.sh xaa &
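Alternatively, since this is a one-off job, you can hand the same loop to at with the now keyword, which detaches it from your terminal entirely. A sketch reusing the paths from the crontab entry above; note the escaped \$a, which reaches at literally while $USER expands at submission time:
echo "while IFS=',' read a; do /home/$USER/import.sh \$a; done < /home/$USER/input/xaa" | at now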

Run a cronjob at a specific time

I would like to run a specific script at a certain time (only once!). If I run it normally like this:
marc@Marc-Linux:~/tennis_betting_strategy1/wrappers$ Rscript write_csv2.R
It does work. However, I would like to program it as a cron job to run at 10:50, and therefore did the following:
50 10 11 05 * Rscript ~/csv_file/write_csv.R
This does not seem to work, however. Any thoughts on where I go wrong? These are the details of the cron package I'm using:
PID COMMAND
1015 cron
My system time also checks out:
marc@Marc-Linux:~/tennis_betting_strategy1/wrappers$ date
wo mei 11 10:56:46 CEST 2016
There is a special tool for running commands only once: at.
With at you can schedule a command like this:
at 09:05 am today
at> enter your commands...
Note that you'll need the atd daemon running.
Your crontab entry looks okay, however. I'd suggest checking whether the cron daemon is running (the exact daemon name depends on the cron package; it could be cron, crond, or vixie-cron, for instance). One way to check is with the ps command, e.g.:
$ ps -C cron -o pid,args
PID COMMAND
306 /usr/sbin/cron
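The same kind of check works for atd if you go the at route (the daemon name may vary by distribution):
$ ps -C atd -o pid,args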
Some advice.
Read more about the PATH variable. Notice that it is set differently in interactive shells (see your ~/.bashrc) and in cron or at jobs. See also this about Rscript.
Replace your command with a shell script, e.g. in ~/bin/myrscriptjob.sh
That myrscriptjob.sh file should start with #!/bin/sh
Be sure to make that shell script executable:
chmod u+x ~/bin/myrscriptjob.sh
Add some logging in your shell script, near the start; either use logger(1) or at least some date(1) command suitably redirected, or even both:
#!/bin/sh
# file myrscriptjob.sh
/bin/date +"myrscriptjob starting %c %n" > /tmp/myrscriptjob.start
/usr/bin/logger -t myrscript job starting $$
/usr/local/bin/Rscript $HOME/csv_file/write_csv.R
In the last line above, replace /usr/local/bin/Rscript with the output of running which Rscript in an interactive terminal.
Notice that you should not use ~ in shell scripts; replace it with $HOME where appropriate.
Finally, use at to run your script once. If you want to run it periodically in a crontab job, give the absolute path, e.g.
5 09 11 05 * $HOME/bin/myrscriptjob.sh
and check in /tmp/myrscriptjob.start and in your system log if it has started successfully.
BTW, in your myrscriptjob.sh script, you might replace the first line #!/bin/sh with #!/bin/sh -vx (then the shell is verbose about execution, and cron or at will send you some email). See dash(1), bash(1), execve(2).
Use full paths (starting from /) for both Rscript and write_csv2.R. A sample command follows:
/tmp/myscript/Rscript /tmp/myfile/write_csv2.R
Ensure you have execute permission on Rscript and write permission in the folder where the output of write_csv2.R will be created (/tmp/myfile).
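You can verify both from an interactive shell (paths as in the sample command above):
which Rscript       # shows the real path of the interpreter
ls -ld /tmp/myfile  # the directory must exist and be writable by you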

Bash script works but not when executed from crontab

I am new to Linux and the script below is just an example of my issue:
I have a script which works as expected when I execute it manually; however, when I set it to run via crontab it doesn't work as expected, because it doesn't read the file content into the variable.
I have a file 'test.txt' which has 'abc' in it. My script puts the text into a variable 'var' and then I echo it out to a log file:
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
This works perfectly fine when I execute it, and it echoes into the log file, but not when I set it via crontab:
* * * * * /home/pi/MyScripts/test.sh
The cron job runs, and it sends me the following error message:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
But I have given it 777 permissions:
-rwxrwxrwx 1 pi pi 25 Jun 10 15:31 test.txt
-rwxrwxrwx 1 pi pi 77 Jun 10 15:34 test.sh
Any ideas?
This happens when you run the script with a different shell. It's especially relevant for systems where /bin/sh is dash:
$ cat myscript
echo "$(< file)"
$ bash myscript
hello world
$ sh myscript
$
To fix it, add #!/bin/bash as the first line in your script.
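Alternatively, if the script must stay /bin/sh-compatible, replace the bash-only $(< file) with cat, which behaves the same at the cost of one extra process:
#!/bin/sh
var=$(cat /home/pi/MyScripts/test.txt)
echo "$var" > /home/pi/MyScripts/log.log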
Others have provided answers, but I will give you a big clue from your error message; emphasis mine:
/bin/sh: 1: /home/pi/MyScripts/test.sh: Permission denied.
Note how the cron job was trying to use /bin/sh to run the script. That's solved by always indicating which shell you want to use at the top of your script, like this:
#!/bin/bash
var=$(</home/pi/MyScripts/test.txt)
echo "$var" >/home/pi/MyScripts/log.log
If your script is using bash, then you must explicitly set /bin/bash in some way.
Also, regarding permissions you say this:
But I have given it 777 permissions:
First, 777 permissions are a massive security risk: they mean that anyone or anything on the system can read, write and execute the file. Don't do that. In the case of a cron job, the only entity that needs full permissions on a file is the owner of the crontab running it.
So if this is your crontab, just change the permissions to 755, which allows others to read and execute but not write. Better yet, change them to 700 so that only you, as the owner of the file, can do anything to it. Avoid 777 permissions if you want to keep your system safe, stable and sane.
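For example, to restrict the script to its owner:
chmod 700 /home/pi/MyScripts/test.sh   # owner: read/write/execute; group and others: nothing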
You have two options. In the first line of your file, tell which program you want to interpret the script:
#!/bin/bash
...more code...
Or in your crontab, tell which program you want to interpret the script:
* * * * * bash /home/pi/MyScripts/test.sh
With this option, you do not need to make the script executable.

"Ambiguous output redirect" trying to send both stdout and stderr to mailx from a command sent to at

I have a bash script called test.sh which, for the sake of simplicity, prints one line to stdout and one line to stderr.
test.sh:
#!/bin/bash
echo "this is to stdout"
echo "this is to stderr" 1>&2
I want to run the script test.sh at 7:00 PM, but only if certain conditions are met. To this end, I have another bash script called schedule.sh, which checks some stuff and then submits the command to at to be run later.
I want the output of test.sh (both stdout and stderr) to be sent to me in an email. I use mailx to do this so I can get a nice subject name.
Furthermore, I want at to shut up. No output from at because it always sends me ugly emails (no subject line) if at produces any output.
schedule.sh:
#!/bin/bash
my_email="me@example.com" # Email is a variable
# Check some stuff, exit if certain conditions not met
echo "~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email" | at 7:00 PM &> /dev/null
What's interesting is that when I run schedule.sh from cron (which runs the script with sh), it works perfectly. However, when I manually run schedule.sh from the terminal (NB: I'm using tcsh), at (not mailx) sends me an email saying
Ambiguous output redirect.
I'm not sure why the shell I run schedule.sh from makes a difference, when schedule.sh is a bash script.
Here is my thinking in looking at schedule.sh. Everything within the quotation marks "~/test.sh 2>&1 | mailx -s\"Cool title\" me@email.com" should be an argument to at, and at runs that argument as a command using sh. The redirection 2>&1 | is in the style of sh for this reason.
When I remove 2>&1 and only pipe the stdout of test.sh to mailx, it does work; however, I receive two emails: one from mailx with the stdout, and another from at with the stderr.
What gives? How can I make this work regardless of the shell I'm calling it from?
Thanks.
edit:
uname -o says my OS is GNU/Linux
Here is uname -a if it helps:
Linux [hostname censored] 2.6.9-89.ELlargesmp #1 SMP Mon Jun 22 12:46:58 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
When I check the at contents using at -c, here's what I see:
#!/bin/sh
# atrun uid=xxxxx gid=xxxxx
# mail username 0
# ...
SHELL=/bin/tcsh; export SHELL
# ...
${SHELL:-/bin/sh} << `(dd if=/dev/urandom count=200 bs=1 2>/dev/null|LC_ALL=C tr -d -c '[:alnum:]')`
~/test.sh 2>&1 | mailx -s"Cool title" me@example.com
I'm having a hard time understanding the second to last line... is this going to execute using $SHELL or /bin/sh?
The command executed via at is:
~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email
The behavior of the at command varies from one system to another. On Linux, the command is executed using /bin/sh. In fact, on my system (Linux Mint 14), it prints a warning message:
$ echo 'printenv > at.env' | at 19:24
warning: commands will be executed using /bin/sh
On Solaris, the command is executed by the shell specified by the current value of the $SHELL environment variable. Using an account where my default shell is /bin/tcsh on Solaris 9, I get:
% echo 'printenv > at.env' | at 19:25
commands will be executed using /bin/tcsh
job 1397874300.a at Fri Apr 18 19:25:00 2014
% echo 'printenv > at.env' | env SHELL=/bin/sh at 19:28
commands will be executed using /bin/sh
job 1397874480.a at Fri Apr 18 19:28:00 2014
Given that at's behavior is inconsistent (and frankly confusing), I suggest having it execute just a single command, with any I/O redirection performed inside that command. That's the best way to ensure the command is executed correctly regardless of which shell runs it.
For example (untested code follows):
echo '#!/bin/bash' > tmp.bash                                         # single quotes keep the shebang literal
echo "~/test.sh 2>&1 | mailx -s\"Cool title\" $my_email" >> tmp.bash  # $my_email expands now, at submission time
chmod +x tmp.bash
echo "./tmp.bash" | at 7:00 PM                                        # at only has to run one simple command

Shell script to log server checks runs manually, but not from cron

I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script:
/scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
When I run this script directly it works fine. It creates the files and puts them in /var/log/systemCheckLogs/ and then sends me an email.
I can't, however, get it to work when trying to get cron to do it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes
I also tried putting it in /var/spool/cron/myservername and also like so:
* * * * * /scripts/logtop > /dev/null 2>&1
it'll run every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally, crontabs are kept in "/var/spool/cron/crontabs/". Also, you normally update them with the crontab command, as this HUPs crond after you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? Use crontab <file> to import a file directly, or crontab -e to edit the current crontab with $EDITOR.
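For example (mycron.txt is a hypothetical file containing your entries):
crontab -l           # list the current user's crontab
crontab mycron.txt   # install the entries from mycron.txt
crontab -e           # edit the installed crontab with $EDITOR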
All jobs run by cron need the interpreter listed at the top so cron knows how to run them.
I can't tell whether you just omitted that line or it is missing from your script.
For example,
#!/bin/bash
echo "Test cron jon"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run jobs for root. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null -- capture and examine them.
Something else to be aware of: cron doesn't initialize the full login environment, which can mean a script runs fine from a fully logged-in shell but behaves differently under cron.
In the case above, you don't have a "#!/bin/sh"-style line up top in your script. If root is configured to use something like a plain Bourne shell or csh, the syntax you use to populate your variables will not work. This would explain why the job runs but never populates your files. So if you need ksh, use "#!/bin/ksh". It's generally best not to trust the environment to keep these things sane. If you need your profile loaded, do a ". ~/.profile" up front as well. A quick and dirty way to get a relatively full environment is to run the job through su:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. You're definitely expecting ksh-like syntax, so you might want to be sure that's the shell actually running the script.
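One quick way to see exactly what environment cron gives your jobs is a throwaway entry that dumps it to a file (the path here is illustrative); then run env in your login shell and compare the two:
* * * * * env > /tmp/cron-env.txt 2>&1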
Thanks for the tips... I used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I am using #!/bin/bash now and that works.
I also had to tinker with the permissions of the directory the log files are dumped into to get things working.
