Can PSFTP execute loops? - linux

I've searched a lot on the internet but haven't been able to find any useful info on this yet. Does PSFTP not allow you to run constructs like 'IF' and 'WHILE' at all?
If it does, please let me know the syntax; I'm tired of banging my head against it. Annoyingly, PuTTY allows these commands but psftp doesn't seem to, even though both are from the same family. I really hope there is a solution to this!

PSFTP isn't a language. It's just an SFTP client. SFTP itself is just a protocol for moving files between computers. If you have SFTP set up on the remote computer then it suggests that you have SSH running (since SFTP generally comes bundled with the SSH server install).
You can do a test in a bash shell script, for instance, to see if the file exists on the remote server, then execute your psftp command based on the result. Something like:
#!/bin/bash
# test if file exists on remote system
fileExists=$(ssh user@yourothercomputer "test -f /tmp/foo && echo 'true' || echo 'false'")
if $fileExists; then
psftp <whatever>
fi
You can stick that whole mess in a loop or whatevs. What's happening here is that we are sending a command test -f /tmp/foo && echo 'true' || echo 'false' to the remote computer to execute. The stdout of the command is returned and stored in the variable fileExists. Then we just test it.
If you are in windows you could convert this to a batch script and use plink.exe to send the command kind of like they do here. Or maybe just plop cygwin on your computer with an SSH and SFTP client and use what's above.
The big take-away here is that you will need a separate scripting environment to do the loop and run psftp based on a test.
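For example, a minimal sketch of that idea in bash (the polling interval and the psftp batch file get_foo.scr are made up for illustration):
#!/bin/bash
# wait until the file shows up on the remote machine, then transfer with psftp
until ssh user@yourothercomputer "test -f /tmp/foo"; do
    sleep 60   # check once a minute
done
psftp -b get_foo.scr user@yourothercomputer   # -b runs psftp commands from a batch file
Here the exit status of the remote test is used directly as the loop condition, so the true/false echo from above isn't even needed.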

Related

How have both local and remote variable inside an SSH command locally available

This is a variation of the question here: How have both local and remote variable inside an SSH command
How do I get the "ssh-variable" B to my local computer?
A=3
ssh host@name "echo $A; B=5; echo $A; echo \$B;"
echo $A
echo $B
echo \$B
returns 3, NOTHING, $B
How can I locally access a value for B?
some explanations based on comments:
I am using a bash script to access a remote host to modify some things there, amongst other things create a workspace there. The location of this workspace is what I want to store in a variable and save for later use.
Basically, I have a function to go to the remote host and make the workspace and then another function to use the path to that workspace to do other things there.
I was hoping for a lightweight, slim solution that can be integrated and easily read in a command similar to this:
ssh host@name "ws_allocate myworkspace; workspace=ws_find myworkspace;"
and then locally use $workspace. This is part of a larger bash script, that should be easy to understand for non-expert bash users (like myself)...
Turns out this is a duplicate of bash - Better way to store variable between runs?
You could do something like this:
X=$(ssh user@host 'echo $X')
This will run echo $X on the server and place the resulting output in a local variable X, which you can later use by saying $X
So to apply this to your example, you could say:
workspace=$(ssh host@name 'ws_allocate myworkspace; ws_find myworkspace;')
Note that the local variable workspace will contain the output from the command that is run remotely. So I'm assuming that in your example only the ws_find myworkspace will print any output and that the call to ws_allocate is silent.
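If ws_allocate is not silent, a small variation (a sketch, assuming it writes its messages to stdout) is to discard its output so that only the path from ws_find is captured:
# keep only ws_find's output; ws_allocate's chatter goes to /dev/null
workspace=$(ssh host@name 'ws_allocate myworkspace >/dev/null; ws_find myworkspace')
echo "$workspace"   # the remote path is now usable locally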
Ok, so the solution that appears to me to be the easiest and that allows the value of $B to be stored on the remote host is to save it in a file. As described and discussed here already: bash - Better way to store variable between runs? :(
ssh ${supercomputer} "
ws_list -a >workspaces.txt;
if grep -q "${experiment}" workspaces.txt;
then echo 'we have a workspace already';
else ws_allocate ${experiment} 30;
fi;
ws_find ${experiment} > mypathtoworkspace;"
where supercomputer is the address of the host and experiment is the name of the workspace I want to create for my experiment (or skip, if it is there already).
I wanted to use the path to the remote workspace locally, e.g.
echo "${workspace}"
but this seems to not be possible - therefore I will download the file I am writing on the remote host and read the path from that file to use it locally
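A sketch of that download-and-read step (the file name mypathtoworkspace comes from the snippet above):
# copy the file written on the remote host, then read the path from it
scp "${supercomputer}:mypathtoworkspace" .
workspace=$(< mypathtoworkspace)
echo "${workspace}"
The intermediate file could also be skipped entirely with workspace=$(ssh ${supercomputer} 'cat mypathtoworkspace').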
If anyone sees a better solution it is welcome!

Provide commands automatically to ftp in bash script

I am trying to create a bash script with ftp.
If I use the terminal and enter the commands below one by one, it works like a charm.
$ ftp 192.168.1.4 2121
Connected to 192.168.1.4.
220 SwiFTP 1.7.11 ready
Name (192.168.1.4:user):
331 Send password
Password:
230 Access granted
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd Study/Math
ftp> put ~/Documents/Math/lesson.pdf lesson.pdf
I am trying to automate this command with a bash script:
#!/bin/bash
ftp 192.168.1.4 2121
cd Study/Math
put ~/Documents/Math/lesson.pdf lesson.pdf
It is not working. I know ftp is an independent tool here and I have to enter those commands while the ftp program is running. I searched the internet and tried various techniques (like printf, expect, etc.) but none of them worked. I also tried some scripts from the internet to automate this process, but nothing helped. I am a newbie at bash scripting and this stuff. Can you guys help me solve this problem?
Thanks in advance...
First, don't do this if you have any other option. It's a fairly standard idiom, but it's really broken in a lot of ways. If you're very sure that it won't ever do anything unpredictable, and that when it does it will be ok anyway, then sure, but in general...
use something besides ftp. For example, scp works quite well and has a checkable return code that is actually useful.
use a more granular programming language with modules. Don't get me wrong, I love bash and will always use it first when I can, but pumping a stream of commands into an ftp like a fire-and-forget UDP torpedo without any easy way of checking that each worked is just bad habit. Try Perl, or Python, or any other damned thing that lets you check a return code on each command and react accordingly. :)
if you MUST use bash (and yes, I have done it), and if it's important enough to check (what isn't?), then think about how you're going to do that. Maybe you can just pull lesson.pdf back to a local testme.pdf and cmp them to make sure it's good, which seems pretty easy (sketched below). For any more complex script, you might need to run the ftp as a coprocess and feed it commands one at a time, then read back its output and parse for return codes, because ftp generally only reports errors there... and watch out for "500 bytes sent", which isn't a 500 error.
Either way, good luck. In many ways, simple is still best.
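A rough sketch of that verification idea, reusing the host and file from the question (the credentials are placeholders, and $HOME is used because the shell expands it inside an unquoted heredoc):
#!/bin/bash
# upload, pull the copy back, and compare the two files byte for byte
ftp -n 192.168.1.4 2121 <<EOF
user username password
cd Study/Math
put $HOME/Documents/Math/lesson.pdf lesson.pdf
get lesson.pdf /tmp/testme.pdf
quit
EOF
cmp "$HOME/Documents/Math/lesson.pdf" /tmp/testme.pdf && echo "upload verified" || echo "copies differ"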
You might be interested in what is known as a heredoc
#!/usr/bin/env bash
ftp -n 192.168.1.4 2121 <<EOF
cd Study/Math
put ~/Documents/Math/lesson.pdf lesson.pdf
EOF
Since the ftp program reads its input from /dev/stdin, you can use a heredoc to define what should be passed to it on /dev/stdin.
Okay, after User123 gave me a link, I finally solved my problem. So, I am giving my working script.
#!/bin/sh
HOST='192.168.1.4 2121'
USER='username_of_ftp'
PASSWD='password_of_ftp'
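# note: $HOST is deliberately left unquoted below so it word-splits into host and port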
/usr/bin/ftp -n $HOST <<END_SCRIPT
user ${USER} ${PASSWD}
cd Study/Math
put ~/Documents/Math/lesson.pdf lesson.pdf
quit
END_SCRIPT
exit 0
While the ftp approach you're working with might be OK for your purposes, it's not going to behave properly if the server doesn't respond as expected.
As has already been suggested, use something other than ftp if at all possible. Scp or rsync using key authentication will work better, be more convenient since it won't break every time you change your password, and be more secure.
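For example, a sketch assuming the host also runs an SSH server and key authentication is already set up:
# scp talks to the SSH port (22 by default), not the FTP port from the question
scp ~/Documents/Math/lesson.pdf user@192.168.1.4:Study/Math/lesson.pdf \
    && echo "upload ok" || echo "upload failed"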

shell script can't see files in remote directory

I'm trying to write an interactive script on a remote server, whose default shell is zsh. I've been trying two different approaches to get this to work:
Approach 1: ssh -t <user>@<host> "$(<serverStatusReport.sh)"
Approach 2: ssh <user>@<host> "bash -s" < serverStatusReport.sh
I've been using approach 1 just fine up until now, when I ran into the following issue - I have a block of code that runs depending on whether certain files exist in the current directory:
filename="./service_log.*"
if ls $filename 1> /dev/null 2>&1 ; then
echo "$filename found."
##process files
else
echo "$filename not found."
fi
If I ssh into the server and run the command directly, I see "$filename found."
If I run the block of code above using Approach 1, I see "$filename not found".
If I copy this block into a new script (let's call this script2), and run it using Approach 2, then I see "$filename found".
I can't for the life of me figure out where this discrepancy is coming from. I thought that the difference may be that script2 is piped into bash whereas my original script is being run with zsh... but considering that running the same command verbatim on the server, with its default zsh shell, returns correctly... I'm stumped.
:( any help would be greatly appreciated!
I guess that when executing your approach 1 it is the local shell that expands "$(<serverStatusReport.sh)", not the remote. You can easily check this with:
ssh -t <user>@<host> "$(<hostname)"
Is the serverStatusReport.sh script also in the PATH on the local host?
What I do not understand is why you get this message instead of an error message.
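One way to see the difference (a hypothetical diagnostic, not part of the original question) is to print the shell and working directory each approach actually gets:
# runs in the remote login shell (zsh on this server); $0 and $PWD expand remotely
ssh <user>@<host> 'echo "shell=$0 cwd=$PWD"'
# runs under bash fed from stdin, as in Approach 2
ssh <user>@<host> 'bash -s' <<< 'echo "shell=$0 cwd=$PWD"'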

shell script, for loop, ssh and alias

I'm trying to do something like this, I need to take backup from 4 blades, and
all should be stored under the /home/backup/esa location, which contains 4
directories with the name of the nodes (like sc-1, sc-2, pl-1, pl-2). Each
directory should contain respective node's backup information.
But I see that "from which node I execute the command, only that data is being
copied to all 4 directories". any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup fornode ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is a NFS mount (otherwise, the files will only be backed up in a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After the command finishes, the remote connection is closed and with it, the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. Now the esaBackup command is run on the local machine. It won't see anything that was set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.

Webapp update shell script

I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail. So if it cannot cd to the directory, it will not run svn update or restart apache. You can do this programmatically by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
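Putting those points together, a minimal sketch (assuming django_src lives under $HOME; adjust the paths to your layout):
#!/bin/bash
set -e   # stop at the first failing command
cd "$HOME/django_src/django_apps/team_proj"
svn update
sudo /etc/init.d/apache2 restart
To make the sudo step non-interactive, a sudoers rule (edited with visudo) along the lines of youruser ALL=(root) NOPASSWD: /etc/init.d/apache2 restart would do.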
I'd set up a cron job to do that automatically.
Since you're using python, check out fabric - you can use it to automate these kind of tasks. First install fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *
def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
        sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh plus ssh and tools), or with programming languages such as Python, Perl, Ruby, PHP, Java, etc.; basically any language that supports the SSH protocol and operating system functions. The "best" one is the one you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task. Check their man pages for more information.
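For instance, a hypothetical crontab entry (added with crontab -e) that runs such a script nightly at 02:30 and appends its output to a log:
# min hour day month weekday  command
30 2 * * * /home/user/bin/project_update.sh >> /home/user/project_update.log 2>&1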
You can easily do the above using bash/Bourne etc.
However I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why ?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I choose Perl particularly because it's been designed (perhaps designed is too strong a word for Perl) for this sort of task. However you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending start time of each command with exit code to a textfile which you can later analyze for failures of the script.
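A sketch of that logging idea (the run_logged helper and the log location are made up):
#!/bin/bash
logfile="$HOME/webapp_update.log"
# run a command, then append a timestamp and its exit code to the log
run_logged() {
    "$@"
    local rc=$?
    echo "$(date '+%F %T') exit=$rc cmd: $*" >> "$logfile"
    return $rc
}
run_logged svn update
run_logged sudo /etc/init.d/apache2 restart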
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip - you should not give users access to some application in an unknown state. svn up might break during the update, users might see a page that's half-new half-old, etc. If you're deploying the whole application at once, I'd suggest doing svn export instead to a new directory and then either mv current old ; mv new current, or even keeping current as a link to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
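A sketch of that export-and-swap approach (the repository URL and directory names are made up for illustration):
# export a clean copy, then repoint a "current" symlink to it in one step
svn export http://svn.example.com/team_proj/trunk releases/build_new
ln -sfn releases/build_new current   # -n replaces the old symlink instead of descending into it
sudo /etc/init.d/apache2 restart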
