Running commands on remote server transparently - linux

I am developing a number of applications that need a bit more power than my local machine has, so I'd like to run them on a remote machine. This is all fairly straightforward and goes something like this: 1) rsync the files in the current directory to some location on the remote machine, 2) ssh to the remote machine and run the command. In some cases, if the remote command generates a file, I'd need to pull that back locally as well.
It feels to me like such a common set of tasks that should be a nice command that puts it all together. Say something like
## Run make on the files in the current directory on big-server-box
rrun big-server-box make
## Do the same, but pull output.txt back afterward
rrun big-server-box -f output.txt make
## Open a shell, having synced files first
rrun big-server-box --shell
Is there any tool that achieves this?

There is already a mechanism for accomplishing what you’re looking for: ssh.
If you want your specific syntax, you could very easily write a wrapper script. The only things that differ from your syntax above are the -f flag to copy files back to your local machine (which would be easy to implement with scp) and the --shell flag, which can be omitted since ssh opens a shell by default when no command is given. Other than that, the syntax is identical: call ssh instead of rrun and you have what you're looking for.
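As a rough illustration, a wrapper with roughly the syntax from the question could look like the sketch below. Everything in it is an assumption (the rrun name, the remote staging directory, the exact flag handling); it is not an existing tool, just rsync, ssh and scp glued together:
#!/bin/bash
# rrun: sync the current directory to a remote host, run a command there,
# and optionally copy one result file back. Hypothetical sketch only.
host=$1; shift
fetch=""
if [ "$1" = "-f" ]; then
    fetch=$2; shift 2
fi
dest="rrun/$(basename "$PWD")"        # assumed staging path on the remote side
ssh "$host" "mkdir -p $dest"
rsync -az --delete ./ "$host:$dest/"
if [ "$1" = "--shell" ]; then
    ssh -t "$host" "cd $dest && exec \$SHELL -l"
else
    ssh "$host" "cd $dest && $*"
fi
if [ -n "$fetch" ]; then
    scp "$host:$dest/$fetch" .
fi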

Related

copy file from one server to another in linux

How do I run commands like ftp, sftp or scp in the background? Also, how do I set up a passwordless connection for running these commands?
Look at the manual pages for scp or rsync, both of which can do this job well. Unless you are forced to, you don't want to use sftp, let alone the unencrypted ftp, for file transfer!
Something like the following, for example:
rsync [some other parameters] -e ssh SOURCE TARGET
Assuming these commands are run from a bash script, you would need to make sure that the two (or more) systems have SSH keys set up so that you can access them without being prompted for a password.
Briefly, you could do it by running this command on one system:
ssh-keygen
following through, this will generate a key. Then run:
ssh-copy-id user@some-remote-system
to copy it to the remote system, which will allow passwordless access, enabling scripts to go about their business without stalling for password prompts.
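With the key in place, a transfer like the rsync example above can also run unattended or in the background, which covers the other half of the question; a minimal sketch with placeholder paths and hostname:
# sync a directory in the background and log the output; paths are placeholders
nohup rsync -az -e ssh /local/data/ user@some-remote-system:/backup/data/ > rsync.log 2>&1 &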

Run script on two machines

I have a shell script that I need to automate with cron. At our office, there is a specific machine that I must log in to in order to use cron. My problem is, that the script I have written interacts with git, using git commands to pull code and switch branches. The machine where I am able to schedule cron jobs and the script is being run from does not have git on it. I have a separate machine that I log in to when I am using git. Is there an easy way for me to run my script from the cron system and run the git part from the git system?
UPDATE: I am still interested if this can be done, but my team has acquired a new machine that we will set up however we choose, meaning that it will have cron and git. Thanks for any ideas
As some people have mentioned above, ssh is the way to do this. This is a bash line that I use a lot in my job, for gathering data from other servers:
ssh -T $server -l username "/export/home/path/to/script.sh $1 $2" 1>traf1.txt 2>/dev/null
The above line will connect to the host $server as user username and run the script script.sh, passing it the parameters $1 and $2. Instead of redirecting the output, you could also assign it to a variable, just as you would with any other command in your script.
PS: Please note that in order for the above to work, you will need to set up passwordless login between those machines. Otherwise your script will stop and wait for password input, which is most probably not the desired behavior.
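Applied to the question above, the cron machine could drive the git commands on the git machine the same way; a hypothetical crontab entry (user, hostname, repository path and branch are all placeholders):
# nightly at 02:00; requires passwordless ssh from the cron machine to the git machine
0 2 * * * ssh gituser@git-machine "cd /path/to/repo && git checkout some-branch && git pull" >> $HOME/gitsync.log 2>&1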

What's a .sh file?

So I am not experienced in dealing with a plethora of file types, and I haven't been able to find much info on exactly what .sh files are. Here's what I'm trying to do:
I'm trying to download map data sets which are arranged in tiles that can be downloaded individually: http://daymet.ornl.gov/gridded
In order to download a range of tiles at once, they say to download their script, which eventually leads to daymet-nc-retrieval.sh: https://github.com/daymet/scripts/blob/master/Bash/daymet-nc-retrieval.sh
So, what exactly am I supposed to do with this code? The website doesn't provide further instructions, assuming users know what to do with it. I'm guessing you're supposed to paste the code into some other unmentioned application for a browser (using Chrome or Firefox in this case)? It almost looks like something that could be pasted into Firefox/Greasemonkey, but not quite. A quick Google search on the file type hasn't helped me make heads or tails of it.
I'm sure there's a simple explanation on what to do with these files out there, but it seems to be buried in plenty of posts where people are already assuming you know what to do with these files. Anyone willing to just simply say what needs to be done from square one after getting to the page with the code to actually implementing it? Thanks.
What is a file with extension .sh?
It is a Bourne shell script. Such scripts are used in many variants of UNIX-like operating systems. They have no language of their own: they are interpreted by your shell (the interpreter of terminal commands), or, if the first line is of the form
#!/path/to/interpreter
they will use that particular interpreter. Your file has the first line:
#!/bin/bash
and that means that it uses the Bourne Again Shell, commonly called bash. It is for all practical purposes a replacement for good old sh.
Depending upon the interpreter you will have different languages in which the file is written.
Keep in mind, that in UNIX world, it is not the extension of the file that determines what the file is (see "How to execute a shell script" below).
If you come from the world of DOS/Windows, you will be familiar with files that have .bat or .cmd extensions (batch files). They are not similar in content, but are akin in design.
How to execute a shell script
Unlike some unsafe operating systems, *nix does not rely exclusively on extensions to determine what to do with a file. Permissions are also used. This means that if you attempt to run the shell script after downloading it, it will be the same as trying to "run" any text file. The ".sh" extension is there only for your convenience to recognize that file.
You will need to make the file executable. Let's assume that you have downloaded your file as file.sh, you can then run in your terminal:
chmod +x file.sh
chmod is a command for changing a file's permissions, +x sets execute permissions (in this case for everybody), and finally you have your file name.
You can also do it from your GUI: most of the time you can right-click on the file, open its properties, and enable the execute permission there.
If you do not wish to change the permissions, you can also force the shell to run the command. In the terminal you can run:
bash file.sh
The shell should be the same as in the first line of your script.
How safe is it?
You may find it weird that you must perform another task manually in order to execute a file. But this is partially because of a strong need for security.
Basically when you download and run a bash script, it is the same thing as somebody telling you "run all these commands in sequence on your computer, I promise that the results will be good and safe". Ask yourself if you trust the party that has supplied this file, ask yourself if you are sure that you have downloaded the file from the same place as you thought, maybe even have a glance inside to see if something looks out of place (although that requires that you know something about *nix commands and bash programming).
Unfortunately apart from the warning above I cannot give a step-by-step description of what you should do to prevent evil things from happening with your computer; so just keep in mind that any time you get and run an executable file from someone you're actually saying, "Sure, you can use my computer to do something".
If you open your second link in a browser you'll see the source code:
#!/bin/bash
# Script to download individual .nc files from the ORNL
# Daymet server at: http://daymet.ornl.gov
[...]
# For ranges use {start..end}
# for individual values, use: 1 2 3 4
for year in {2002..2003}
do
  for tile in {1159..1160}
  do
    wget --limit-rate=3m http://daymet.ornl.gov/thredds/fileServer/allcf/${year}/${tile}_${year}/vp.nc -O ${tile}_${year}_vp.nc
    # An example using curl instead of wget
    # curl --limit-rate 3M -o ${tile}_${year}_vp.nc http://daymet.ornl.gov/thredds/fileServer/allcf/${year}/${tile}_${year}/vp.nc
  done
done
So it's a bash script. Got Linux?
In any case, the script is nothing but a series of HTTP retrievals. Both wget and curl are available for most operating systems, and almost all languages have HTTP libraries, so it's fairly trivial to rewrite in any other technology. There are also some Windows ports of bash itself (git includes one). Last but not least, Windows 10 now has native support for Linux binaries.
.sh files are UNIX (Linux) shell executable files; they are the equivalent (but much more powerful) of .bat files on Windows.
So you need to run it from a Linux console, just by typing its name, the same way you do with .bat files on Windows.
Typically a .sh file is a shell script which you can execute in a terminal. Specifically, the script you mentioned is a bash script, which you can see if you open the file and look in the first line of the file, which is called the shebang or magic line.
I know this is an old question and I probably won't help, but many Linux distributions (e.g., Ubuntu) have a "live CD/USB" function, so if you really need to run this script, you could try booting your computer into Linux. Just burn an .iso to a flash drive (here's how: http://goo.gl/U1wLYA), start your computer with the drive plugged in, and press the F key for the boot menu. If you choose "...USB...", you will boot into the OS you just put on the drive.

How do I run .sh scripts?

Give execute permission to your script:
chmod +x /path/to/yourscript.sh
And to run your script:
/path/to/yourscript.sh
Since . refers to the current directory: if yourscript.sh is in the current directory, you can simplify this to:
./yourscript.sh
Or, to do it with a GUI, see:
https://askubuntu.com/questions/38661/how-do-i-run-sh-scripts/38666#38666
https://www.cyberciti.biz/faq/run-execute-sh-shell-script/
Open the file's location in a terminal, then type these commands:
1. chmod +x filename.sh
2. ./filename.sh
That's it.

Using vim to remotely edit a file on serverB only accessible from serverA

Although I have never tried this, it is apparently possible to remotely edit a file in vim as described here. In my particular case the server I need access to can only be accessed from on campus, hence I have to log into my university account like so:
ssh user@login.university.com
then from there log into the secure server like so:
ssh user@secure.university.com
I have keyless ssh set up, so I can automate the process like so:
ssh user@login.university.com -t "ssh user@secure.university.com"
Is there any way to remotely edit a file such as secure.university.com/user/foo.txt from my local machine?
EDIT:
My intention is to use vim on my local machine as it is impractical (move .vim folder, copy .vimrc) and in some cases impossible (recompile vim with certain settings, patch vim source, install language beautifiers) to make vim on the remote machine behave the way I want it to behave. What I want is to issue something like this (this is not accurate scp, I know)
vim scp://user@login.university.com scp://user@secure.university.com//home/user/foo.txt
OK, after a little working around, I figured it out. First you have to edit (or create) your .ssh/config file as described here. For our purposes, we will add an entry like this, which essentially adds a proxy:
Host secure
    User Julius
    HostName secure.university.com
    ProxyCommand ssh Tiberius@login.university.com nc %h %p 2> /dev/null
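As an aside, on newer OpenSSH releases (7.3 and later) the same hop can be written with ProxyJump instead of the nc-based ProxyCommand; a sketch using the same hostnames:
Host secure
    User Julius
    HostName secure.university.com
    ProxyJump Tiberius@login.university.com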
Then we can simply copy (via scp) the file secure.university.com:/home/Julius/fee/fie/fo/fum.txt to the local computer like so
scp secure:/home/Julius/fee/fie/fo/fum.txt fum.txt
Extending on this, we can load it into vim remotely like so:
vim scp://secure//home/Julius/fee/fie/fo/fum.txt
or using badd like so:
:badd scp://secure//home/Julius/fee/fie/fo/fum.txt
To simplify my life, I added this shortcut to my .vimrc file for the most commonly used subfolder:
nnoremap <leader>scp :badd scp://secure//home/Julius/fee/fie/fo/fum.txt
So far vim has proven to be pretty aware that this is a remote file, so if the C file includes a file like so:
#include "foo.h"
it won't complain that "foo.h" is missing.
Once you have SSHed into the machine, you can run any command (including vim) on the remote host from your shell. After logging in, run vim just as you would on your own machine.
Since you are using ssh, you basically have access to the server via the CLI, as if you were sitting in front of the machine itself. With that said, you can use any program on that machine, just like you would use it on your own machine. Assuming that the secure.university.com/user/foo.txt means that there is a text file called foo.txt at location /user on the secure server, then the following commands would work after logging in through ssh:
cd /user
vim foo.txt
You could also use nano or any other CLI based editor that is installed on the machine.

Webapp update shell script

I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail, so if it cannot cd to the directory, it will not run svn update or restart Apache. You can achieve the same thing command by command by putting || exit 1 after each one, but if that's all you're doing, you may as well use set -e
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
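Putting that advice together, a minimal sketch of such a script could look like this (the paths are taken from the question; adjust them to your layout):
#!/bin/bash
set -e                                        # stop at the first failing command
cd "$HOME/django_src/django_apps/team_proj"   # absolute path, no assumption about the caller's cwd
svn update
sudo /etc/init.d/apache2 restart              # assumes sudo is set up to not prompt for a password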
I'd set up a cron job to do that automatically.
Since you're using python, check out fabric - you can use it to automate these kind of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *

def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
    sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh, plus ssh and standard tools), or with a programming language such as Python, Perl, Ruby, PHP or Java; basically any language that supports the SSH protocol and operating system functions. The "best" one is the one you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
You can easily do the above using bash/Bourne etc.
However I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl in particular because it's been designed (perhaps "designed" is too strong a word for Perl) for these sorts of tasks. However you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time and exit code of each command to a text file which you can later analyze for failures.
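A minimal sketch of that kind of logging, wrapping each command in a small helper (the log file location is a placeholder):
#!/bin/bash
log=$HOME/team_proj_update.log                # placeholder log location
run_logged() {
    echo "$(date '+%F %T') START: $*" >> "$log"
    "$@"
    echo "$(date '+%F %T') EXIT $?: $*" >> "$log"
}
cd "$HOME/django_src/django_apps/team_proj"
run_logged svn update
run_logged sudo /etc/init.d/apache2 restart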
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
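A rough sketch of that export-and-swap approach (the repository URL and directory layout are placeholders):
# build a fresh copy next to the live one, then swap; REPO_URL and paths are placeholders
svn export "$REPO_URL" /srv/webapp/new
mv /srv/webapp/current /srv/webapp/old && mv /srv/webapp/new /srv/webapp/current
# or, if current is kept as a symlink, replace it in a single step:
# ln -sfn /srv/webapp/new /srv/webapp/current
sudo /etc/init.d/apache2 restart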
