Run script on two machines - linux

I have a shell script that I need to automate with cron. At our office, there is a specific machine that I must log in to in order to use cron. My problem is that the script I have written interacts with git, using git commands to pull code and switch branches. The machine where I am able to schedule cron jobs, and from which the script is run, does not have git on it. I log in to a separate machine when I need to use git. Is there an easy way for me to run my script from the cron machine and run the git part on the git machine?
UPDATE: I am still interested in whether this can be done, but my team has acquired a new machine that we will set up however we choose, meaning that it will have both cron and git. Thanks for any ideas.

As some people have mentioned above, ssh is the way to do this. This is a bash line that I use a lot in my job for gathering data from other servers:
ssh -T $server -l username "/export/home/path/to/script.sh $1 $2" 1>traf1.txt 2>/dev/null
The above line connects to the IP in $server as user username and runs script.sh, passing it the parameters $1 and $2. Instead of redirecting the output, you could also assign it to a variable, just as you would with any other command in your script.
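For example, a minimal sketch of capturing the remote output in a variable instead (the variable name is just a placeholder):
# Capture the remote script's stdout in a local variable
output=$(ssh -T "$server" -l username "/export/home/path/to/script.sh $1 $2" 2>/dev/null)
echo "Remote output was: $output"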
PS: Please note that in order for the above to work, you will need to set up passwordless login between those machines. Otherwise your script will stop and prompt for a password, which is most probably not the desired behavior.
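If you have not set that up yet, a minimal sketch using OpenSSH key-based authentication (assuming the ed25519 key type is available; older systems may need rsa instead):
# Generate a key pair without a passphrase, then install the public key on the remote host
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub username@$server
# Verify: this should now log in without prompting
ssh username@$server true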

Related

Automatic Synchronization with rsync

I am developing a script that will automatically keep a folder on two workstations synchronized using rsync. So far I have gotten everything to work, but there is one small thing I haven't figured out. Once the rsync command is executed, it prompts for the password of the other workstation. However, I haven't been able to find a way to enter that password automatically when prompted in the terminal. I tried using the expect command, but that didn't work, as the command didn't execute until after I entered the password.
Is there a solution to this?
Here is my script. I have two instances of the same VM, hence the same usernames:
#!/bin/bash
LOCAL="/home/rams/Documents/"
RSYNC_OPTIONS="-avh --progress /home/rams/Documents/ rams@192.168.1.39:/home/rams/Downloads/"
PASSWORD="rams2020"
while true
do
    inotifywait -e modify "$LOCAL"
    rsync $RSYNC_OPTIONS
done
Solution shown here:
https://stackoverflow.com/a/3299970/8860865
TL;DR - Use ssh-keygen to generate a key pair that authenticates the rsync connection over SSH, so that no password is prompted: https://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id/
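Once the key pair is installed, rsync will authenticate over SSH without prompting. If you want to pin a specific private key (for example, from a cron job), you can pass it through rsync's -e option; a sketch, with the key path as a placeholder:
# Tell rsync to use ssh with an explicit identity file
rsync -avh --progress -e "ssh -i /home/rams/.ssh/id_ed25519" /home/rams/Documents/ rams@192.168.1.39:/home/rams/Downloads/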

How to log every single command executed from shell script

I am trying to find a way to record every single command that is executed by any user on the system.
Things that I have come across earlier:
It is possible to view shell commands executed from the terminal using the ~/.bash_history file.
There is a catch here: it only logs those commands which were executed interactively from a bash shell/terminal.
This solves one of my problems. But in addition, I would also like to log those commands which were executed as part of a shell script.
Note: I don't have control over the shell scripts. Therefore, adding verbose mode like #!/bin/bash -xe is not possible.
However, it can be assumed that I have root access as the system administrator.
E.g.: I have another user who has access to the system, and he runs the following shell script from his account:
#!/bin/sh
nmap google.com
and runs it as "$ sh script.sh".
Now, what I want is for the "nmap google.com" command to be logged somewhere once this file is executed.
Thanks in advance. Even a little help is appreciated.
Edit: I would like to clarify that the users are unaware that they are being monitored, so I need a solution at the system level (maybe an agent running as root). I cannot depend on the users to log suspicious activity; of course, anyone doing something fishy or wrong will avoid such tricks to put the blame on someone else.
I am aware that you were asking about Bash and shell scripting and tagged your question accordingly, but with respect to your requirements
Record every single command that is executed by any user on the system
Users are unaware that they are being monitored
A solution something at system level
I am under the assumption that you are looking for Audit Logging.
So you may take advantage of articles like
Log all commands run by Admins on production servers
Log every command executed by a User
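If you go the audit route, a minimal sketch of a rule that records every execve() call, i.e. every command run by any user (assuming auditd is installed and running; the key name cmdlog is arbitrary):
# Log every command executed on a 64-bit system, tagged with the key "cmdlog"
auditctl -a always,exit -F arch=b64 -S execve -k cmdlog
# Later, query the recorded commands by that key
ausearch -k cmdlog --interpret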
You can run the script in this way:
execute it with bash -x (this overrides the shebang and traces every command)
pipe through ts to prefix every line with a timestamp
tee to log both to the terminal and to a file
bash -x script.sh |& ts | tee -a /tmp/$(date +%F).log
You may ask the other user to create an alias.
Edit:
You may also add this to /etc/profile (sourced when users log in):
exec > >(tee -a /tmp/$(date +%F).log)
Do the same for the error output if needed, but keep the two streams split.
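A sketch of keeping the two streams in separate daily log files (the paths are placeholders):
# In /etc/profile: duplicate stdout and stderr into separate logs
exec > >(tee -a /tmp/$(date +%F).out.log)
exec 2> >(tee -a /tmp/$(date +%F).err.log >&2)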

no job control in this shell

Quick question regarding this error in Bash. I have some bash scripts that I run at work on some servers. We have been trying to implement Rundeck as a way to automate. When I call these scripts through Rundeck, I get the error bash: no job control in this shell. Now, I am new to Linux and shell scripting, but what I have figured out is that this is due to the interactivity of the shell. The scripts I am calling will call other scripts (Perl and shell) as part of their operation, but since I am not logged on in a terminal, they can't do this and they fail. That is the best understanding I have.
I have tried adding -l to the #!/bin/bash line in the script I send through Rundeck. I have also tried to use the ad-hoc command option in Rundeck and run the jobs individually. Still no luck. I thought perhaps it was a pathing issue, so I tried setting the path with PATH=$PATH, but no change.
Basically, I need to allow these scripts to open their subprocesses. So the question is: how do I modify how I call these scripts so that they have full job control? Is that possible? Sorry if this lacks info; I just don't know how to properly put it out there. Let me know what other info is needed. Thanks.
Edit: Some code snippets showing where the message comes in:
system "pntadm -R $Net >/dev/null 2>&1";
system "pntadm -C $Net $BUILD{$Net}{netmask} $MYip";

Run bash script upon ssh

I have a host on which I created a script.
The script is executed whenever a user logs in via ssh; bashrc launches the script.
Now I'm trying to get the script to execute even if the user is not actually logging in, but just running a command.
For example, I want the script to be executed if a user runs the following:
ssh user@host.com some_command
Is there a way to achieve the above?
A solution affecting all users could be to use pam_exec and launch a script on the user login event. Check the pam_exec manual page and an example of how to use it: pam_exec scripting.
A simple solution for a single user would be to add your script to the rc file of the ssh user:
~/.ssh/rc
I've done some tests, and the rc solution works fine in your case: it gets executed when the user launches a remote command via ssh.
If you don't have an rc file, just create it.
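A sketch of what ~/.ssh/rc could contain; sshd runs it with /bin/sh on every connection for that user, interactive or not (the log path and message are placeholders):
# Runs on every ssh connection for this user, login shell or single command
echo "$(date): ssh connection from ${SSH_CLIENT%% *}" >> "$HOME/ssh-connections.log"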
You can edit the authorized_keys file and add a command= option, something like:
command="/home/michale/bin/dothis.sh" ...public key...
For more details, read the ssh and authorized_keys documentation.
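Note that a forced command replaces whatever the user asked to run, but sshd puts the original request in the SSH_ORIGINAL_COMMAND environment variable, so the forced script can do its work and then hand control back. A sketch of such a wrapper (the script paths are hypothetical):
#!/bin/sh
# Forced command: run our script first, then the user's real command (or a login shell)
/home/michale/bin/real_work.sh
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    exec $SSH_ORIGINAL_COMMAND
else
    exec "$SHELL" -l
fi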

Webapp update shell script

I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2, 3 and 4.
I can do this, but it will be a very plain and simple bash script, simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script (a minimal sketch incorporating them follows this list):
put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail, so if it cannot cd to the directory, it will not run svn update or restart apache. You can do this per command by putting || exit 1 after each one, but if that's all you're doing, you may as well use set -e.
Use full paths in your script. Do not assume the directory the script is run from. In this specific case, the cd command has a relative path; use a full (absolute) path, or an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive, which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command-line arguments to parameterise the script. But don't bother doing this up front; just evolve your scripts as you need.
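A minimal sketch of such a script incorporating those points, assuming the project lives under $HOME (adjust the paths to your layout):
#!/bin/bash
set -e  # stop at the first failing command
cd "$HOME/django_src/django_apps/team_proj"  # absolute path, not relative
svn update
sudo /etc/init.d/apache2 restart
If sudo still prompts for a password, a NOPASSWD rule added via visudo, e.g. youruser ALL = NOPASSWD: /etc/init.d/apache2, makes the last step non-interactive.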
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
I'd set up a cron job to do that automatically.
Since you're using Python, check out fabric; you can use it to automate these kinds of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *

def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
        sudo('/etc/init.d/apache2 restart')
Save it as fabfile.py, then run it using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh plus ssh and tools) or with programming languages such as Python, Perl, Ruby, PHP or Java; basically, any language that supports the SSH protocol and operating system functions. The "best" one is the one you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task; check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However, I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl particularly because it's been designed (perhaps designed is too strong a word for Perl) for these sorts of tasks. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time and exit code of each command to a text file which you can later analyze for failures of the script.
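A sketch of that kind of logging with a small wrapper function (the log path is a placeholder):
#!/bin/bash
LOG=/var/log/team_proj_update.log
run_logged() {
    # Record the start time, run the command, then record its exit code
    echo "$(date '+%F %T') START: $*" >> "$LOG"
    "$@"
    local rc=$?
    echo "$(date '+%F %T') EXIT $rc: $*" >> "$LOG"
    return $rc
}
run_logged svn update
run_logged sudo /etc/init.d/apache2 restart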
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh "$host" "cd $proj_dir; svn update; sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip: you should not give users access to an application in an unknown state. svn up might break during the update, users might see a page that's half-new, half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead, into a new directory, and then either mv current old; mv new current, or even keeping current as a symlink to the directory you're using now. It's still not perfect and doesn't block every possible race condition, but it definitely takes less time than svn up on the live copy.
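A sketch of the symlink variant (the repository URL and paths are placeholders):
# Export a clean copy into a fresh timestamped directory, then repoint "current"
new=/srv/webapp/releases/$(date +%Y%m%d%H%M%S)
svn export http://svn.example.com/team_proj/trunk "$new"
ln -sfn "$new" /srv/webapp/current
sudo /etc/init.d/apache2 restart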
