Quick question regarding this error in Bash. I have some bash scripts that I run at work on some servers. We have been trying to implement Rundeck as a way to automate. When I call these scripts through Rundeck I get the error bash: no job control in this shell. Now I am new to Linux and shell scripting, but what I have figured out is that this is due to the interactivity of the shell. The scripts I am calling will call other scripts (Perl and shell) as part of their operation. But since I am not logged on in a terminal, they can't do this and fail. That is the best understanding I have.
I have tried adding a -l to the #!/bin/bash line in the script I send through Rundeck. I have also tried to use the ad-hoc command option through Rundeck and run the jobs individually. Still no luck. I thought perhaps it was a pathing issue, so I tried setting the path with PATH=$PATH, but no change.
Basically I need to allow these scripts to open their subprocesses. So the question is, how do I modify how I call these scripts so they have full control? Is that possible? Sorry if this lacks info, I just don't know how to properly put it out there. Let me know what other info is needed. Thanks
Edit: Some code snippets showing where the message comes in:
system "pntadm -R $Net >/dev/null 2>&1";
system "pntadm -C $Net $BUILD{$Net}{netmask} $MYip";
Related
I am trying to find a way to record every single command that is executed by any user on the system.
Things that I have come across earlier:
It is possible to view shell commands executed from the terminal using the ~/.bash_history file.
There is a catch here: it logs only those commands which were executed interactively from a bash shell/terminal.
This solves one of my problems, but in addition I would also like to log those commands which were executed as part of a shell script.
Note: I don't have control over the shell scripts, so adding verbose mode like #!/bin/bash -xe is not possible.
However, it can be assumed that I have root access as the system administrator.
E.g.: I have another user who has access to the system, and he runs the following shell script from his account:
#!/bin/sh
nmap google.com
and runs it as "$ sh script.sh".
Now, what I want is for the "nmap google.com" command to be logged somewhere once this file is executed.
Thanks in advance. Even a small help is appreciated.
Edit: I would like to clarify that the users are unaware that they are being monitored, so I need a solution at the system level (maybe an agent running as root). I cannot depend on users to log suspicious activity themselves. Of course, anyone doing something fishy or wrong will avoid such tricks or try to put the blame on someone else.
I am aware that you were asking about Bash and shell scripting and tagged your question accordingly, but with respect to your requirements:
Record every single command that is executed by any user on the system
Users are unaware that they are being monitored
A solution something at system level
I am under the assumption that you are looking for Audit Logging.
So you may take advantage of articles like:
Log all commands run by Admins on production servers
Log every command executed by a User
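As a rough illustration of what those articles cover (assuming auditd is installed; the rule and the key name cmdlog below are only examples), the Linux audit framework can record every execve call on the system:
# log every program execution on a 64-bit system; "cmdlog" is an arbitrary key
auditctl -a always,exit -F arch=b64 -S execve -k cmdlog
# later, review what was run
ausearch -k cmdlog --interpret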
You can run the script in this way:
execute it with bash (this will override the shebang)
use ts to prefix every line with a timestamp (ts is part of moreutils)
log to both the terminal and a file
bash -x script.sh |& ts | tee -a /tmp/$(date +%F).log
You may ask the other user to create an alias.
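Since an alias can't easily drop the script name into the middle of that pipeline, a small function in the user's ~/.bashrc may work better; this is just a sketch, the name runlogged is made up, and it assumes ts (from moreutils) is installed:
runlogged() { bash -x "$@" |& ts | tee -a "/tmp/$(date +%F).log"; }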
Edit:
You may also add this to /etc/profile (sourced when users log in):
exec > >(tee -a /tmp/$(date +%F).log)
Do the same for error output if needed, but keep the two streams split.
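One way to do that, keeping the two streams in separate files (the log names here are only examples), is:
exec > >(tee -a /tmp/$(date +%F).out.log)
exec 2> >(tee -a /tmp/$(date +%F).err.log >&2)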
I am wondering when and why we need execute permission in Linux, given that we can run any script without execute permission by using the syntax below:
bash SomeScriptFile
Not all programs are scripts — bash for example isn't. So you need execute permission for executable programs.
Also, when you say bash SomeScriptFile, the script has to be in the current directory (or you have to give its path). If you make the script executable and put it in a directory on your PATH (e.g. $HOME/bin), then you can run the script without the unnecessary circumlocution of bash $HOME/bin/SomeScriptFile (or bash ~/bin/SomeScriptFile); you can simply run SomeScriptFile. This economy is worth having.
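For example (assuming $HOME/bin is already on your PATH), making the script executable and putting it there lets you invoke it by name alone:
mkdir -p ~/bin
cp SomeScriptFile ~/bin/
chmod +x ~/bin/SomeScriptFile
SomeScriptFile   # found via PATH; no 'bash' and no path needed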
Execute permission on a directory is somewhat different, of course, but also important. It permits the 'class of user' (owner, group, others) to access files in the directory, subject to per-file permissions also allowing that.
Executing the script by invoking it directly and running the script through bash are two very different things.
When you run bash ~/bin/SomeScriptFile you are really just executing bash -- a command interpreter. bash in turn loads the script and runs it.
When you run ~/bin/SomeScriptFile directly, the system is able to tell that this file is a script and finds the interpreter to run it. There is a bit of magic involving the #! on the first line that is used to look up the right interpreter.
The reason we run scripts directly is that the user (and the system) need not know or care whether the command being run is a script or a compiled executable.
For instance, if I write a nifty shell script called fixAllIlls and later I decide to re-write it in C, as long as I keep the same interface, the users don't have to do anything different.
To them, it is just a program to run.
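To make that concrete, a hypothetical fixAllIlls could start out as nothing more than the script below and later be replaced by a compiled binary of the same name without callers noticing:
#!/bin/bash
# fixAllIlls - callers just type 'fixAllIlls'; they never see what is inside
echo "all ills fixed"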
Edit:
The operating system checks permissions first for several reasons:
Checking permissions is faster
In the days of old, you could have SUID scripts, so one needed to check the permission bits.
As a result, it was possible to run scripts that you could not actually read the contents of. (That is still true of binaries.)
When I manually run a Perl script which is called via a Linux shell script, it works fine, but it does not run via cron.
Linux_script.sh contains the call to Perl_script.pl, and the command is:
perl_path/perl Perl_script.pl
I got perl_path using the which perl command.
Can anyone suggest why it does not run from the cron entry?
Most likely suspects:
Current work directory isn't as expected.
Permission issues[1].
Environment variables aren't set up as expected.
Requirement of a terminal can't be met.
See the crontab tag wiki for more pitfalls and some debugging tips.
The first thing you should do is to read the error message.
[1] This isn't likely to be an issue for your own cron job, but I've included it since it's quite a common problem for scripts run from other services.
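To actually capture that error message from a cron job, one common approach (the schedule and paths below are only illustrative) is to redirect the job's output to a file in the crontab entry:
# keep stdout and stderr so the failure can be inspected afterwards
0 2 * * * /home/user/Linux_script.sh >> /tmp/Linux_script.log 2>&1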
The most probable cause is the current working directory.
Before the perl command executes, add a command to change to the correct directory.
Something like:
cd perl_path;
perl Perl_script.pl
I'm struggling trying to debug a cron job which isn't working correctly. The cron job calls a shell script which should unrar a rar file - this works correctly when I run the script manually, but for some reason it's not working via cron. I am using the absolute file path and have verified that the path is correct. Has anyone got any ideas why this could be happening?
Well, you already said that you have used absolute paths, so the number one problem is dealt with.
Next to check are permissions. Which user is the cron job run as? Does it have all the permissions necessary?
Then, a little trick: if a shell script fails and it's not run in a terminal, I like to redirect its output to a file. Right at the start of the script, add:
exec &>/tmp/my.log
This will redirect STDOUT and STDERR to /tmp/my.log. Then it might be a good idea to also add the line:
set -x
This will make bash print which command it's about to execute, and at what nesting level.
Happy debugging!
The first thing to check when cron jobs fail is whether the full environment is available to the script you are trying to execute. In other words, you need to realize that a job executed via cron runs as a detached process, meaning it is not associated with a login environment. Therefore, whenever you debug a cron job that works when executed manually, you need to be sure the same environment is available to the cron job as is available when you execute it manually. This includes any PATH settings and other environment variables that the script may depend on.
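A hedged way to do that is to set the variables the script needs in the crontab itself, or to source the login profile before the script runs; the paths below are placeholders:
# in the crontab: provide PATH explicitly, or pull in the login environment first
PATH=/usr/local/bin:/usr/bin:/bin
30 3 * * * . "$HOME/.profile"; /path/to/unrar_job.sh >> /tmp/unrar_job.log 2>&1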
For me, the problem was a different shell interpreter in crontab.
I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
Put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail, so if it cannot cd into the directory, it will not run svn update or restart Apache. You can do the same thing manually by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e.
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
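Putting those points together, a minimal sketch of the script (assuming the project directory from the question lives under $HOME and passwordless sudo is configured) might look like:
#!/bin/bash
set -e                                        # stop at the first failing command
cd "$HOME/django_src/django_apps/team_proj"   # absolute path, not relative
svn update
sudo /etc/init.d/apache2 restart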
I'd set up a cron job to do that automatically.
Since you're using Python, check out Fabric - you can use it to automate these kinds of tasks. First install Fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *
def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
    sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh + ssh + tools) or with programming languages such as Python, Perl, Ruby, PHP, or Java; basically any language that supports the SSH protocol and operating system functions. The "best" one is the one you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However, I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl particularly because it has been designed (perhaps "designed" is too strong a word for Perl) for this sort of task. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time and exit code of each command to a text file which you can later analyze for failures of the script.
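A minimal sketch of that kind of logging (the helper name and log path are made up) wraps each command and appends its start time and exit code:
log_step() {
  # run the given command, then record when it started and how it exited
  local start rc
  start=$(date '+%F %T')
  "$@"
  rc=$?
  echo "$start exit=$rc cmd: $*" >> /tmp/deploy_history.log
  return "$rc"
}
log_step svn update
log_step sudo /etc/init.d/apache2 restart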
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
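A rough sketch of that idea (the repository URL and directory names are placeholders) is to export into a fresh directory and only swap it in once it is complete:
# export a clean copy next to the live one, then swap the directories
svn export http://svn.example.com/repo/trunk /srv/webapp/new
mv /srv/webapp/current /srv/webapp/old
mv /srv/webapp/new /srv/webapp/current
# or, if 'current' is a symlink, just repoint it:
# ln -sfn /srv/webapp/releases/2009-06-01 /srv/webapp/current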