Writing to multiple remote terminals over SSH - linux

Let's say I am SSH'ing to multiple remote machines. I'd like to send commands to all the machines through one interface. I was thinking of opening a named pipe whose output would be piped to each machine I SSH into. That way, if I echo "ls -l" > namedpipe, the command would run on each machine. I tried this, but it didn't work. Any suggestions on how I can have one terminal from which I could interact with all the remote machines?

GNU Parallel is the way to go. There are lots of examples on SO and elsewhere, including some that use named pipes when needed. Other tools are mentioned and compared in the parallel manpage.
As to your example, what you want can be done as simply as
parallel ssh {} "ls -l" ::: user1@machine1 user2@machine2 ...
Some Linux distributions come with a configuration file (usually /etc/parallel/config) where the option --tollef is set by default. If this is your case and you don't want to change it, you must use -- instead of ::: in the example above, or, alternatively, use the --gnu option to override --tollef.
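For example, to force GNU-style parsing on such a system, a sketch of the same command as above:
parallel --gnu ssh {} "ls -l" ::: user1@machine1 user2@machine2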
Equivalently, if you create a file called remotelist containing
user1@machine1
user2@machine2
you can issue:
parallel -a remotelist ssh {} "ls -l"
or, as noted by a comment below,
parallel --nonall --slf remotelist ls -l
The --slf option (short for --sshloginfile) allows stuffing more information into the remotelist file: comments, the number of processors to use on each remote host, and the like.
You might also consider the --tag option, which prepends each output line with the name of the host it originates from.
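Putting those together, a sketch that runs a single command on every host in remotelist and labels each output line with its origin (assuming GNU parallel with GNU extensions enabled):
parallel --tag --nonall --slf remotelist uptime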

There are plenty of tools available that could help you do that. Some worth checking out are:
pssh
Ansible Ad-Hoc module
They both work over SSH.
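For instance, a quick sketch of each; hosts.txt (one host per line), username, and the command are placeholders to adapt:
# pssh: run the command on every host in hosts.txt, printing output inline
pssh -h hosts.txt -l username -i "ls -l"
# Ansible ad-hoc: same idea, using the flat host list as an inventory
ansible all -i hosts.txt -a "ls -l"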
If you still want to use a custom script with a SSH client, you could do something like:
while read -r i
do
  # -n stops ssh from swallowing the rest of server.list from stdin
  ssh -n user@"$i" <command to execute> <arg>
done < server.list

Related

Appending to file with sudo access

I am trying to append a line to an existing file owned by root, and I have to do this on about 100 servers. So I created servers.txt with all the IPs and ntp.txt with the lines that I need to append. I am executing the following script, but it is not achieving what I want. Can someone please suggest what needs to be corrected?
!/bin/bash
servers=`cat servers.txt`;
for i in $servers;
do
cat ntp.txt | ssh root@${i} sudo sh -c "cat >>ntp.conf""
done
Here are some issues; not sure I found all of them.
The shebang line lacks the # which is significant and crucial.
There is no need to read the server names into a variable, and in addition to wasting memory, you are exposing yourself to a number of potential problems; see https://mywiki.wooledge.org/DontReadLinesWithFor
Unless you specifically require the shell to do whitespace tokenization and wildcard expansion on a value, put it in double quotes (or even single quotes, but this inhibits variable expansion, which you still want).
If you are logging in as root, no need to explicitly sudo.
ssh runs a shell for you; no need to explicitly sh -c your commands.
On the other hand, you want to avoid running a root shell if you can. A common way to append to a file without having to spawn a shell just to be able to redirect is to use tee -a (drop the -a to overwrite instead of append). Printing the file to standard output is an undesired effect (some would say the main effect rather than a side effect of tee, but let's not go there), so you often redirect to /dev/null to keep the text from also spilling onto your screen.
You probably want to avoid a useless use of cat if only just to avoid having somebody point out to you that it's useless.
#!/bin/bash
while read -r server; do
  ssh you@"$server" sudo tee -a /etc/ntp.conf <ntp.txt >/dev/null
done <servers.txt
I changed the code to log in as you but it's of course something you will need to adapt to suit your environment. (If you log in as yourself, you usually just ssh server without explicitly specifying a user name.)
As per your comment, I also added the full path to the destination file, /etc/ntp.conf.
A more disciplined approach to server configuration is to use something like CFengine2 to manage configurations.

Bash: Unexpected parallel behavior when reading arguments from file using xargs

Previous
This is a follow-up to this question.
Specs
My system is a dedicated server running Ubuntu Desktop, Release 12.04 (precise) 64-bit, 3.14.32-xxxx-std-ipv6-64. Neither the release nor the kernel can be upgraded, but I can install any package.
Problem
The problem described in the question above seems to be solved; however, this doesn't work for me. I've installed the latest lftp and parallel packages and they seem to work fine on their own.
Running lftp works fine.
Running ./job.sh ftp.microsoft.com works fine, but I needed to chmod +x the script first.
Running sed 's/|.*$//' end_unique.txt | xargs parallel -j20 ./job.sh ::: does not work and produces bash errors in the form of /bin/bash: <server>: command not found.
To simplify things, I cleaned the input file end_unique.txt, now it has the following format for each line:
<server>
Each line ends in a CRLF, because it is imported from a windows server.
Edit 1:
This is the job.sh script:
#!/bin/sh
server="$1"
lftp -e "find .; exit" "$server" >"$server-files.txt"
Edit 2:
I took the file and ran it against fromdos. Now it should be standard unix format, one server per line. Keep in mind that the server in the file can vary in format:
ftp.server.com
www.server.com
server.com
123.456.789.190
etc. All of those servers are ftp servers, accessible by ftp://<serverfromfile>/.
With :::, parallel expects the list of arguments it needs to complete the commands it's going to run to appear on the command line, as in
parallel -j20 ./job.sh ::: server1 server2 server3
Without ::: it reads the arguments from stdin, which serves us better in this case. You can simply say
parallel -j20 ./job.sh < end_unique.txt
Addendum: Things that can go wrong
Make certain of two things:
That you are using GNU parallel and not another version (such as the one from moreutils), because only (as far as I'm aware) the GNU version supports reading an argument list from stdin, and
That GNU parallel is not configured to disable the GNU extensions. It turned out, after a lengthy discussion in the comments, that they are disabled by default on Ubuntu 12.04, so it is not inconceivable that this sort of thing might be found elsewhere (particularly downstream from Ubuntu). Such a configuration can hide in
The environment variable $PARALLEL,
/etc/parallel/config, or
~/.parallel/config
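A quick sketch to verify both points (the config paths are the ones listed above):
# GNU parallel identifies itself in its version banner; other variants won't
parallel --version | head -n1
# any --tollef/--gnu defaults lurking in the environment or config files?
echo "$PARALLEL"
cat /etc/parallel/config ~/.parallel/config 2>/dev/null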
If the GNU version of parallel is not available to you, and if your argument list is not too long for the shell and none of the arguments in it contain whitespace, the same thing with the moreutils parallel is
parallel -j20 job.sh -- $(cat end_unique.txt)
This did not work for OP because the file contained more servers than the shell was willing to put into a command line, but it might work for others with similar problems.
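And if the input file might still carry the DOS line endings mentioned above, a minimal sketch that strips them on the fly instead of running fromdos first:
tr -d '\r' < end_unique.txt | parallel -j20 ./job.sh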

How can I issue parallel commands to remote nodes with different arguments?

I need to execute an application in parallel on multiple Ubuntu Linux servers while supplying different arguments to different servers. I tried to google it, but could not find a solution. I even experimented with ssh/pdsh/parallel, but without success.
To explain the scenario further, here is a non-working example (with pdsh) where script.sh should be executed on all 3 servers in parallel but with different arguments. FYI, I already have public/private SSH keys (password-free login) in place.
$ pdsh -w server1,server2,server3 -l username script.sh args
where args should be 1 for server1, 2 for server2 etc.
I would appreciate it if someone could help me achieve this, either using pdsh or some other tool available in Ubuntu. Thanks for your help.
I've done similar things using cssh in the past, but I don't see why pdsh wouldn't work with the same approach (although I've never used it).
My assumption is that you have an interactive session running simultaneously on all systems and you are able to create new environment variables in each session. If pdsh doesn't allow this, cssh does.
Before starting your script, set an environment variable based on the hostname.
ARG=$( case $(hostname) in server1) echo 1;; server2) echo 2;; server3) echo 3;; esac )
script.sh $ARG
Assuming the number you want is encoded in your hostname (as suggested in your question), you can simplify it like this:
script.sh ${HOSTNAME#server}
You don't need any special tools - ssh and bash are sufficient. If you want to add more servers, all you need to do is add them to the arrays near the top. Be careful with the spaces in the sample code below. Use multiple argument arrays if you have multiple arguments per server. An ssh config file allows per-host user names.
#!/bin/bash
servers=( server1 server2 server3 )
args=( args1 args2 args3 )
count=${#servers[@]}
k=0
while ((k<count))
do
  # double-quote so the local args array expands here, not on the remote side
  ssh -l username "${servers[k]}" "script.sh ${args[k]}" &
  ((k++))
done
wait
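Since the question also mentions parallel: GNU parallel can pair each server with its argument via --link (a sketch, assuming a reasonably recent version; older ones call it --xapply):
parallel --link ssh -l username {1} script.sh {2} ::: server1 server2 server3 ::: 1 2 3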

Using Linux commands on files across multiple servers

I am new to Linux as a whole and so far I have not found a solution to this that isn't clumsy at best. I come from a Windows background, so I am accustomed to running commands on one server that access text files on multiple systems in the same domain.
Example of what is processed in Windows:
find "Some text" \\ServerName01\c$\inetpub\*.log
find "Some text" \\ServerName02\c$\inetpub\*.log
find "Some text" \\ServerName03\c$\inetpub\*.log
Example of what I would LIKE to do in Linux:
sed 's/SomeText/OtherText/p' //ServerName01/var/opt/somefolder/*.log
sed 's/SomeText/OtherText/p' //ServerName02/var/opt/somefolder/*.log
sed 's/SomeText/OtherText/p' //ServerName03/var/opt/somefolder/*.log
What is the best way to do the above in Linux, or is it even possible?
Thanks!
See the pssh and pscp suite; you can run commands on a bunch of remote servers: http://www.theether.org/pssh/
pssh or cssh would work
pssh provides a number of commands for executing against a group of
computers, using SSH. It’s most useful for operating on clusters of
homogenously-configured hosts.
http://www.ubuntugeek.com/execute-commands-simultaneously-on-multiple-servers-using-psshcluster-sshmultixterm.html
There are a lot of ways to do it:
Via an NFS/FUSE mount: mount the remote log directories on one system and you can do the same thing as on Windows (which automatically mounts remote filesystems via the "\\" paths).
Use ssh (that would be my preferred solution):
xargs -I{} ssh {} "grep \"some text\" yourfilepaths" < serverlist
which helps if you use SSH key pairs.
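Applied to the sed example from the question, a minimal sketch (assuming key-based logins and the same log path on every server):
for server in ServerName01 ServerName02 ServerName03; do
  # -n with /p prints only the lines where the substitution happened
  ssh "$server" "sed -n 's/SomeText/OtherText/p' /var/opt/somefolder/*.log"
done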

iLO3: Multiple SSH commands

Is there a way to run multiple commands in HP's Integrated Lights-Out 3 system via SSH? I can log in to iLO and run commands line by line, but I need to create a small shell script that connects to iLO and runs some commands one by one.
This is the line I use to get information about the iLO version:
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version"
Now, how can I do something like this?
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version" "show /map1 license" "start /system1"
This doesn't work, because iLO thinks it's all one command. But I need something to log in to iLO, run these commands, and then exit. Running them one after the other takes too much time, because every iLO SSH login takes ~5-6 seconds (5 commands = 5*5 seconds...).
I've also tried to separate the commands directly in iLO after a manual login, but there is no way to use multiple commands in one line. It seems a command can only be finished by pressing Return.
iLO-SSH Version is: SM-CLP Version 1.0
The following solutions did NOT work:
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version; show /map1 license; start /system1"
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version && show /map1 license && start /system1"
This Python module is for HP iLO management; check it out:
http://pypi.python.org/pypi/python-hpilo/
Try putting your commands in a file (named theFile in this example):
version
show /map1 license
start /system1
Then:
ssh -i dsa_key administrator@iLO-IP < theFile
Semicolons and such won't work because you're using the iLO shell on the other side, not a normal *nix shell. So above I redirect the file, with newlines intact, as if you were typing all that into the session by hand. I hope it works.
You are trying to treat iLO like it's a normal shell, but it's really HP's dopey interface.
That being said, the easiest way is to put all the commands in a file and then pipe it to ssh (sending all of the newline characters):
echo -e "version\nshow /map1 license\nstart /system1" | /usr/bin/ssh -i dsa_key administrator@<iLO-IP>
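Equivalently, a here-document keeps the commands readable without building the string by hand (a sketch, same commands as above):
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> <<'EOF'
version
show /map1 license
start /system1
EOF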
That's a messy workaround, but you might fancy using expect. Your script in expect would look something like this:
# Make an ssh connection
spawn ssh -i dsa_key administrator@<iLO-IP>
# Wait for command prompt to appear
expect "$"
# Send your first command
send "version\r"
# Wait for command prompt to appear
expect "$"
# Send your second command
send "show /map1 license\r"
# Etc...
On the bright side, it's guaranteed to work. On the darker side, it's a pretty clumsy workaround, very prone to breaking if something doesn't go the way it should (for example, if the command-prompt character appears in the version output, or something like that).
I'm on the same case and wish to avoid running a lot of plink commands. I've seen you can add a file with the -m option, but apparently it executes just one command at a time :-(
plink -ssh Administrator#AddressIP -pw password -m test.txt
What's the purpose of the file? Is there a special format for it?
My current text file looks like below:
set /map1/oemhp_dircfg1 oemhp_usercntxt1=CN=TEST
set /map1/oemhp_dircfg1 oemhp_usercntxt2=CN=TEST2
...
Is there a solution to execute these two commands?
I had similar issues and ended up using the "RIBCL over HTTPS" interface to the iLO. This has advantages in that it is much more responsive than logging in/out over ssh.
Using curl or another command-line HTTP client, try:
USERNAME=<YOUR_ILO_USERNAME>
PASSWORD=<YOUR_ILO_PASSWORD>
ILO_URL=https://<YOUR_ILO_IP>/ribcl
curl -k -X POST -d "<RIBCL VERSION=\"2.0\">
<LOGIN USER_LOGIN=\"${USERNAME}\" PASSWORD=\"${PASSWORD}\">
<RIB_INFO MODE=\"READ\">
<GET_FW_VERSION/>
<GET_ALL_LICENSES/>
</RIB_INFO>
<SERVER_INFO MODE=\"write\">
<SET_HOST_POWER HOST_POWER=\"Yes\"/>
</SERVER_INFO>
</LOGIN>
</RIBCL>" ${ILO_URL}
The formatting isn't exactly the same, but if you have the ability to access the iLO via HTTPS instead of only ssh, this may give you some flexibility.
More details on the various RIBCL commands and options may be found at HP iLO 3 Scripting Guide (PDF).
