I am planning to run some bash scripts every minute, and I wrote:
* * * * * bash ~/Dropbox/temp_scripts/run_all_scripts
in crontab.
It was supposed to run every minute, but it did not work. Does anyone have an idea why this happens?
Transferring a comment into an answer.
Add I/O redirection to the command line in the crontab entry:
>/tmp/run_all_scripts.out 2>/tmp/run_all_scripts.err
Review the contents of the files after a minute or two has passed. Consider recording the environment to see if that's part of the problem. And consider using bash -x instead of just bash.
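For example, the crontab entry could end up looking something like this (the .env file name is just an illustration):
* * * * * env > /tmp/run_all_scripts.env; bash -x ~/Dropbox/temp_scripts/run_all_scripts >/tmp/run_all_scripts.out 2>/tmp/run_all_scripts.err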
If you still don't get anything (the files in /tmp are not created), then you've got issues with cron; the daemon isn't running, or your user does not have permission to use it (but crontab isn't telling you that), or you've not submitted your crontab to the program (what does crontab -l say?), or … whatever is really wrong.
Note, too, that the output from cron jobs is normally (well, at least sometimes; on Mac OS X for a system I currently use, and on Solaris for another that I've used previously) mailed to the owner of the job. You should review that mail on the system.
Thank you! I have already fixed it! The reason it did not work is that I used "ls -a *.sh" in the script, and cron could not find any *.sh files in the directory it was executing from. After modifying it to "ls -a $HOME/Dropbox/temp_scripts/*.sh", everything works! This debugging technique is quite helpful!
It is, in many ways, the most basic of debugging techniques — make sure you see what is actually happening. If you're not sure why a shell script isn't working, make sure you can see that it is executing and what it is producing in the way of output, and (very often) make sure you can see what it is executing with bash -x or equivalent. (AFAIK, all shells support -x to trace the execution.)
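As a toy illustration of what -x tracing shows (the exact quoting in the trace can vary between shells):
$ bash -x -c 'name=world; echo "hello $name"'
+ name=world
+ echo 'hello world'
hello world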
Related
I am trying to find a way to record every single command that is executed by any user on the system.
Things that I have come across earlier:
It is possible to view shell commands executed from the terminal using the ~/.bash_history file.
There is a catch here: it logs only those commands that were executed interactively from the bash shell/terminal.
This solves one of my problems, but in addition I would also like to log commands that were executed as part of a shell script.
Note: I don't have control over the shell scripts. Therefore, adding a verbose mode like #!/bin/bash -xe is not possible.
However, it can be assumed that I have root access as the system administrator.
E.g.: I have another user that has access to the system, and he runs the following shell script from his account.
#!/bin/sh
nmap google.com
and runs it as "$ sh script.sh".
Now, what I want is that the "nmap google.com" command should be logged somewhere once this file is executed.
Thanks in advance. Even a little help is appreciated.
Edit: I would like to clarify that the users are unaware that they are being monitored, so I need a solution at the system level (maybe an agent running with root). I cannot depend on users to log their own suspicious activity; of course, anyone doing something fishy or wrong will avoid such tricks or try to put the blame on someone else.
I am aware that you were asking about Bash and shell scripting and tagged your question accordingly, but with respect to your requirements
Record every single command that is executed by any user on the system
Users are unaware that they are being monitored
A solution something at system level
I am under the assumption that you are looking for Audit Logging.
So you may take advantage of articles like
Log all commands run by Admins on production servers
Log every command executed by a User
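A minimal sketch, not taken from those articles, of how the Linux audit subsystem can record every executed command (assuming auditd is installed and running; the key name exec_log is arbitrary):
# Hypothetical audit rules: log every execve() on the 64-bit and 32-bit syscall ABIs
auditctl -a always,exit -F arch=b64 -S execve -k exec_log
auditctl -a always,exit -F arch=b32 -S execve -k exec_log
# Later, search the recorded commands by the same key
ausearch -k exec_log -i
To make the rules survive a reboot, the same rules (without the leading auditctl) can be placed in a file under /etc/audit/rules.d/.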
You can run the script in this way:
execute it with bash (this will override the shebang)
pipe it through ts to prefix every line with a timestamp
log both to the terminal and to a file
bash -x script.sh |& ts | tee -a /tmp/$(date +%F).log
You may ask the other user to create an alias.
Edit:
You may also add this into /etc/profile (sourced when users log in):
exec > >(tee -a /tmp/$(date +%F).log)
Do the same for error output if needed; keep the two streams split.
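For instance, a sketch along the same lines for the error stream (the file name is just an example):
exec 2> >(tee -a /tmp/$(date +%F).err.log)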
Quick question regarding this error in Bash. I have some bash scripts that I run at work on some servers. We have been trying to implement Rundeck as a way to automate. When I call these scripts through Rundeck I get the error bash: no job control in this shell. Now I am new to Linux and shell scripting, but what I have figured out is that this is due to the interactivity of the shell. The scripts I am calling will call other scripts (Perl and shell) as part of their operation. But since I am not logged on in a terminal, they can't do this and they fail. That is the best understanding I have.
I have tried adding -l to the #!/bin/bash line in the script I send through Rundeck. I have also tried using the ad-hoc command option in Rundeck and running the jobs individually. Still no luck. I thought perhaps it was a pathing issue, so I tried setting the path with PATH=$PATH, but there was no change.
Basically I need to allow these scripts to open their subprocesses. So the question is, how do I modify how I call these scripts so they have full control? Is that possible? Sorry if this lacks info; I just don't know how to properly put it out there. Let me know what other info is needed. Thanks.
Edit: Some code snippets showing where the message comes in:
system "pntadm -R $Net >/dev/null 2>&1";
system "pntadm -C $Net $BUILD{$Net}{netmask} $MYip";
The problem...
I use trick77's IP blacklist script to configure the firewall of my Apache server and am able to run his script in the terminal.
However, when assigning the bash script in ipset-blacklist to crontab, it will not run no matter what I do.
Code written in crontab file for root:
@daily /var/bash/update-blacklist.sh
What I think is the culprit...
Since I haven't done this sort of thing before, I believe that the PATH of the bash script isn't set correctly... but again, I'm not sure.
I have seen others using a line such as PATH=/usr/bin:/bin:/usr/sbin:/sbin to resolve problems involving the script's location, but, I don't exactly know what this does.
I set the location of the bash file to /var/bash instead of /usr/bin and I believe that this is throwing things off.
Pardon my lack of understanding. I really am a beginner when it comes to bash.
Any help is greatly appreciated.
What I have done...
Per @EtanReisner:
Added echo here >> /tmp/update-blacklist.out to top of update-blacklist.sh and set cron to run it every minute (* * * * *).
The file was successfully created.
Added type -p curl grep egrep ipset >> /tmp/update-blacklist.out to the top of update-blacklist.sh and it returned:
-p: not found
curl is /usr/bin/curl
grep is /bin/grep
egrep is /bin/egrep
ipset: not found
The output from type ipset indicated that ipset was not in the cron script PATH which isn't surprising.
The default PATH for cron jobs is fairly limited.
With ipset located in /usr/sbin, that is the path that must be added to the cron script's PATH variable.
You talked about this in your question:
I have seen others using a line such as PATH=/usr/bin:/bin:/usr/sbin:/sbin to resolve problems involving the script's location, but, I don't exactly know what this does.
What that does is set the PATH variable to those paths (overriding whatever the default value was).
The PATH variable contains the paths where the shell looks for binaries/scripts/etc. to run when you try to run them as commands.
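As an illustration only (the exact list of directories is an assumption and varies by system), the top of update-blacklist.sh could set its own PATH like this:
#!/bin/bash
# Make sure the directories cron leaves out, including /usr/sbin where ipset lives, are searched
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
export PATH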
I have a script used for zipping a database and site files, then dumps the output into a backup folder on the server. The script runs fine from the command line, but it will not work through cron.
After much research, I am thinking that cron cannot run it in its current form because it runs in a different environment.
Here is the script, saved as file_name.sh
#!/bin/bash
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="website.com.$NOW.tar"
BACKUP_DIR="/backupfolder"
WWW_DIR="/var/www/website/"
DB_USER="dbuser"
DB_PASS="dbpw"
DB_NAME="dbname"
DB_FILE="website.com.$NOW.sql"
WWW_TRANSFORM='s,^var/www/website,www,'
DB_TRANSFORM='s,^backupfolder,database,'
tar -cvf $BACKUP_DIR/$FILE --transform $WWW_TRANSFORM $WWW_DIR
mysqldump -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE
tar --append --file=$BACKUP_DIR/$FILE --transform $DB_TRANSFORM $BACKUP_DIR/$DB_FILE
rm $BACKUP_DIR/$DB_FILE
gzip -9 $BACKUP_DIR/$FILE
I currently have the script stored in /usr/local/scripts/
Is there something wrong with the above code that does not allow it to run through cron?
Which crontab should it go in? crontab -e from terminal, or /etc/crontab? They are two different files.
Several things come to mind: first, one of the most common problems with cron jobs is that generally crond runs things with a very minimal PATH (usually just /usr/bin:/bin), so if the script uses any commands from some other binaries directory, it'll fail. Where is mysqldump on your system (run which mysqldump if you aren't sure)? If this is the problem, adding PATH=/usr/local/bin:/usr/bin:/bin (or whatever's appropriate in your case) at the beginning of your script should fix it. Alternately, you can set PATH in the crontab file (put this line before the entry that runs your script).
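For example, a crontab edited with crontab -e could look roughly like this (the PATH value is only an example):
PATH=/usr/local/bin:/usr/bin:/bin
1 1 * * * /usr/local/scripts/file_name.sh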
If that's not the problem, my next step would be to capture the script's output, with something like:
1 1 * * * /usr/local/scripts/file_name.sh >/tmp/file_name.log 2>&1
... and see if the output is informative. BTW, as @tripleee mentioned, the format of your cron entry is suitable for the files crontab -e edits, but not for /etc/crontab. The /etc version has an additional field specifying which user to run the job as, e.g.
1 1 * * * eric /usr/local/scripts/file_name.sh >/tmp/file_name.log 2>&1
Best practice is to always use crontab -e (the resulting files are usually in /var/spool/cron/), and this works on every Unix and Linux platform I have ever worked on.
Other common issues with cron execution are missing environment variables. Any environment variables set in .bash_profile (or .profile if you use the Korn shell) will not necessarily be present in the cron environment. This can be overcome by including them in your script.
As Gordon said, paths are another suspect. You can always use the full path to your executables in your script (e.g. /bin/mysqldump). Some of the more cynical of us do this anyway to make sure we are executing what we intended, as opposed to some other file of the same name in the current path.
I can only guess at your specific problem: since you fixed it by creating /scripts, perhaps the permissions on the /usr/local/scripts directory did not allow execution by the cron user?
I have had to remove the extension (.sh) for cron to run in some instances.
So I fixed it. Not sure what the problem was, but this worked for me.
I originally had the scripts located in /usr/local/scripts/
I created a new directory here - /scripts/ and moved the scripts there. The new crontab -e command looked like this:
1 1 * * * bash /scripts/file_name.sh
Works perfectly. Again, I am not sure what the issue was before, but it works now.
I am trying to get the number of open files periodically through crontab using lsof|wc -l.
It always returns zero, but it gives the correct result when I run it directly. Any idea about this strange behaviour? Is it related to the pipe size, as the result can be quite large? Thanks a lot.
Kaka
The main difference is the environment variables.
In this case it might be the PATH. lsof is often found in /usr/sbin, which might be in your PATH when you run it interactively but not when it is run from cron.
Try /usr/sbin/lsof | wc -l in your cron script. And check the local mail, as cron output is normally sent there; there might be relevant error messages.
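For instance, a crontab entry along these lines (the log file is only an example) sidesteps cron's limited PATH entirely:
* * * * * /usr/sbin/lsof | wc -l >> /tmp/open_files.log 2>&1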
Is it related to pipe size as the result can be quite large?
No.