Adding a new user to multiple Unix servers from the terminal - Linux

I work at a company and need to be added as a user on a number of branch servers. The current way of doing this is:
sudo /usr/local/bin/sd-adduser test "Test User"
This needs to be done by logging into each server manually - about 20 servers in all. I vaguely know of expect, which supposedly lets you add a user to multiple servers. Could anyone point me in the right direction, or provide a script to do this?
Any help is appreciated.

Sounds like multi-ssh, pssh, or pdsh could help you.
In the long run you probably want a central user management like LDAP.
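For example, a minimal sketch with pssh (the binary may be installed as pssh or parallel-ssh depending on the distribution; hosts.txt is a hypothetical file listing one server per line, and this assumes sudo does not prompt for a password on the targets):
pssh -h hosts.txt -i 'sudo /usr/local/bin/sd-adduser test "Test User"'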

Routine administration tasks such as this can be done using a script that reads a list of server names and runs a command. Something like this "each-host" script:
#!/bin/sh
# run the given command on every server listed in "mylist"
for server in $(cat mylist)
do
    ssh -t "$server" "$@"
done
where mylist is a file containing the list of servers.
Thus
each-host sudo /usr/local/bin/sd-adduser test "Test User"
would run the OP's command on each host. Once you get that working, you could tidy it up a little, making it less verbose (not printing /etc/motd):
#!/bin/sh
# quieter version: print a header per server and suppress motd noise with -q
for server in $(cat mylist)
do
    echo "** $server"
    ssh -q -t "$server" "$@"
done

Related

How to create one logfile when ssh-ing to multiple servers

I'd like to create a bash script that automatically connects to a bunch of servers, executes some commands on each, and saves the output of those commands in one logfile on the server I use to connect to all the others.
So far I have only managed either to create a separate logfile on each of the servers I connect to, or to display the output of each command on the console of the server I'm connecting from.
My script currently looks like this (I know about for loops, but I don't want to use one in this case because I need to execute different commands on each server):
#!/bin/bash
ssh server1 <<EOF
hostname
printf '\n'
mount
EOF
printf '\n'
printf '\n'
printf '\n'
ssh server2 <<EOF
hostname
printf '\n'
mount
EOF
...
My idea was to use the &>> operator, because I need to know whether all commands were executed successfully or not. In the end I'd like to have only one logfile which should look somewhat like this:
server1
output of mount
server2
output of mount
...
So, how can I manage to create only one large logfile that contains the results of all executed commands? Also, will this script still work correctly if I use the ssh -T option to get rid of the message "Pseudo-terminal will not be allocated because stdin is not a terminal."? And do I have to escape special characters like / _ - when using mount in my script to mount something?
Thanks in advance!
I suggest using open-source utilities like logstash or fluentd.
I would use fabric, which is a tool to interact with several servers using ssh. It provides operations for executing remote shell commands.
For your example, the fabfile:
from fabric.api import run

def my_task():
    run('hostname')
    run('mount')
And you can execute it:
fab -H server1,server2 my_task
Output goes to standard output on the machine where you run fab, so you can easily redirect it to a file:
fab -H server1,server2 my_task > my_task.log

FTP status check using a variable - Linux

I am doing an FTP transfer and want to check its status. I don't want to use '$?' because ftp mostly returns 0 (success) even when the transfer itself didn't go through.
I know I could check the log file and grep it for "Transfer complete" (the 226 status). That works fine, but I have many different reports doing FTP transfers, and I want to avoid creating a separate log file for each of them.
Can I get the logged information in a local script variable and process it inside the script itself?
Something similar to these (I've tried both but neither worked):
Grab FTP output in BASH SCRIPT
FTP status check whether successful or not
Below is something similar to what I am trying to do:
ftp -inv ${HOST} > log_file.log <<!
user ${USER} ${PASS}
bin
cd "${TARGET}"
put ${FEEDFILE}
bye
!
Any suggestions on how can I get the entire ftp output in a script variable and then check it within the script?
To capture stdout to a variable you can use bash's command substitution, so either OUTPUT=`cmd` or OUTPUT=$(cmd).
Here's an example how to capture the output from ftp in your case:
CMDS="user ${USER} ${PASS}
bin
cd \"${TARGET}\"
put \"${FEEDFILE}\"
bye"
OUTPUT=$(echo "${CMDS}" | ftp -inv ${HOST})
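A minimal follow-up sketch for checking the captured output inside the script (the exact status message can vary between ftp clients, so verify it against your own log output first):
if echo "${OUTPUT}" | grep -q "Transfer complete"; then
    echo "FTP transfer succeeded"
else
    echo "FTP transfer failed:" >&2
    echo "${OUTPUT}" >&2
    exit 1
fi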

How to send different commands to multiple hosts to run programs in Linux

I am an R user. I always run programs on multiple computers on campus. For example, I need to run 10 different programs, so I open PuTTY 10 times to log into 10 different computers (their OS is Linux) and submit one program to each. Is there a way to log into 10 different computers and send commands to them at the same time? I use the following commands to submit programs:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script that uses ssh to run the appropriate command on each remote host. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
    host1)
        ssh $h nohup Rscript L_1_cc.R > L_1_sh.txt
        ;;
    host2)
        ssh $h nohup Rscript L_2_cc.R > L_2_sh.txt
        ;;
    esac
done
This is a very simplistic example. You can do much better than this (for example, you can put the ".R" and the ".txt" file names into a variable and use that rather than explicitly listing every option in the case).
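For example, a sketch of that tidier variant, assuming the script number simply follows the position of the host in the list:
#!/bin/bash
i=1
for h in host1 host2 host3
do
    # derive both file names from the host's position in the list
    ssh "$h" "nohup Rscript L_${i}_cc.R" > "L_${i}_sh.txt"
    i=$((i+1))
done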
Based on your topic tags, I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are using is *nix-based so you can use the following script. If you are on Windows, consider Cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you for a password each time you log into a target. You can then script the command execution on each target with something like the following:
#!/bin/bash
# kill script if we throw an error code during execution
set -e
# define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1 )
# define associated user names for each host
users=( joe bob steve )
# counter to track iteration for correct user name
j=0
# iterate through each host and ssh into each with user@host combo
for i in ${hosts[*]}
do
    # modify the ssh command string as necessary to get your script to execute properly
    # you could even add commands to transfer the file into which you seem to be dumping your results
    ssh ${users[$j]}@$i 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
# exit no error
exit 0
If you set up the public key authentication, you should just have to execute your script to make every remote host do their thing. You could even look into loading the users/hosts data from file to avoid having to hard code that information into the arrays.
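For example, a minimal sketch that reads "user host" pairs from a hypothetical servers.txt file instead of hard-coding the arrays:
#!/bin/bash
# each line of servers.txt holds a user name and a host, separated by a space
while read -r user host
do
    ssh "${user}@${host}" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
done < servers.txt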

How to connect to multiple servers to run the same query?

I have 4 servers with log files in the same pattern. For every search/query I need to log into each server one by one and execute the command.
Is it possible to use some command so that it will log into all those servers one by one automatically and fetch the output from each?
What configuration, settings, etc. do I have to do to make this work?
I am new to the Linux domain.
As suggested in your question comments, there are a number of tools to help you in performing a task on multiple machines. I will add to this list and suggest Ansible. It is designed to perform all of the interactions over ssh, in quite a simple manner, and with very little configuration.
https://github.com/ansible/ansible
If you were to have server-1 and server-2 defined in your ~/.ssh/config file, then the Ansible inventory would be as simple as
[myservers]
server-1
server-2
Then to run a command on the group
$ ansible myservers -a uptime
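For the log-search case in the question, you would quote the whole command (the log path and pattern here are only illustrative):
$ ansible myservers -m shell -a "grep 'intrusion attempt' /var/log/firewall.log"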
If your servers are called eenie, meanie, minie, and moe, you simply do
for server in eenie meanie minie moe; do
    ssh "$server" grep 'intrusion attempt' /var/log/firewall.log
done
The grep command won't reveal which server a result came from; to prefix each line with the server name, you could replace it with:
ssh "$server" sed -n "/intrusion attempt/s/^/$server: /p" /var/log/firewall.log
Use https://sealion.com. You just have to execute one script and it will install the agent on your servers and start collecting output. It has a convenient web interface to see the output across all your servers.

How can I scp a file and run an ssh command asking for password only once?

Here's the context of the question:
In order for me to be able to print documents at work, I have to copy the file over to a different computer and then print from that computer. (Don't ask. It's complicated and there is no other viable solution.) Both computers run Linux and I work in bash. The way I currently do this is to scp the file over to the print computer, then ssh in and print from the command line.
Here's what I would like to do:
In order to make my life a bit easier, I'd like to combine these two steps into one. I could easily write a function that does both, but then I would have to provide my password twice. Is there any way to combine the steps so that I only provide my password once?
Before somebody suggests it, key-based ssh-logins are not an option. It has been specifically disabled by the Administrators for security reasons.
Solution:
What I ended up doing was a modification of the second solution Wrikken provided. Simply wrapping his first suggestion in a function would have gotten the job done, but I liked the idea of being able to print multiple documents without having to type my password once per document. I have a rather long password and I'm a lazy typist :)
So what I did was take a sequence of commands and wrap them in a Python script. I used Python because I wanted to parameterize the script, and I find that easiest to do in Python. I cheated and just ran bash commands from Python through os.system; Python only handled parameterization and flow control. The logic was as follows:
if socket does not exist:
    run bash command to create socket with timeout
copy file using the created socket
ssh command to print using socket
In addition to using a timeout, I also have an option in my Python script to manually close the socket should I wish to do so.
If anyone wants the code, just let me know and I'll either paste-bin it or put it on my git repo.
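The OP kept their version in Python, but the same logic fits in a short bash sketch using ssh's ControlMaster options (the host name, socket path, and lpr-based print step are all hypothetical):
#!/bin/bash
HOST=printhost
SOCKET=~/.ssh/print-socket
FILE=$1
# reuse the master connection if the socket is still live, otherwise open one
if ! ssh -S "$SOCKET" -O check "$HOST" 2>/dev/null; then
    # -M opens a master connection; ControlPersist closes it after 10 idle minutes
    ssh -f -N -M -S "$SOCKET" -o ControlPersist=600 "$HOST"
fi
# copy the file through the existing socket, then print it remotely
scp -o ControlPath="$SOCKET" "$FILE" "$HOST":/tmp/
ssh -S "$SOCKET" "$HOST" "lpr /tmp/$(basename "$FILE")"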
ssh user@host 'cat - > /tmp/file.ext; do_something_with /tmp/file.ext; rm /tmp/file.ext' < file.ext
Another option would be to just leave an ssh tunnel open:
In ~/.ssh/config:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/ssh-socket-%r-%h-%p
$ ssh -f -N -l user host
(socket is now open)
Subsequent ssh/scp requests will reuse the already existing tunnel.
Here is a bash script template which follows @Wrikken's second method but can be used as is - no need to edit the user's SSH config file:
#!/bin/bash
TARGET_ADDRESS=$1 # the first script argument
HOST_PATH=$2 # the second script argument
TARGET_USER=root
TMP_DIR=$(mktemp -d)
SSH_CFG=$TMP_DIR/ssh-cfg
SSH_SOCKET=$TMP_DIR/ssh-socket
TARGET_PATH=/tmp/file
# Create a temporary SSH config file:
cat > "$SSH_CFG" <<ENDCFG
Host *
    ControlMaster auto
    ControlPath $SSH_SOCKET
ENDCFG
# Open a SSH tunnel:
ssh -F "$SSH_CFG" -f -N -l $TARGET_USER $TARGET_ADDRESS
# Upload the file:
scp -F "$SSH_CFG" "$HOST_PATH" $TARGET_USER@$TARGET_ADDRESS:"$TARGET_PATH"
# Run SSH commands:
ssh -F "$SSH_CFG" $TARGET_USER@$TARGET_ADDRESS -T <<ENDSSH
# Do something with $TARGET_PATH here
ENDSSH
# Close the SSH tunnel:
ssh -F "$SSH_CFG" -S "$SSH_SOCKET" -O exit "$TARGET_ADDRESS"
