I am completely new to scripting and am having some trouble piecing this together from some other online resources.
What I want to do is run a bash script that greps for a keyword domain in the /etc/hosts file on multiple servers. In the output file, I want a list of the servers that contain this keyword; I am not looking to make any changes, simply to find which machines have this value. There are too many machines in question to list them by hand, but the machine I am running this from does have SSH keys for all of them.
I have a listing of the servers I want to query in three files on the machine (one for each environment) I am going to run this script from.
Linux.prod.dat
Linux.qa.dat
Linux.dev.dat
Each file is simply a list of server names in the environment. For example:
server1
server2
server3
etc...
I am totally lost here and would appreciate any help.
Here is an example:
KEYWORD=foo
SERVERLIST=Linux.prod.dat
OUTPUTLIST=output.dat

for host in $(cat "${SERVERLIST}"); do
    # grep -q is quiet; ssh passes the remote grep's exit status back,
    # so the if tests whether the keyword was found on that host
    if ssh "${host}" grep -q "${KEYWORD}" /etc/hosts; then
        echo "${host}" >> "${OUTPUTLIST}"
    fi
done

Note that the keyword must be in double quotes; with single quotes around ${KEYWORD}, grep would search for the literal string ${KEYWORD}.
Try GNU parallel
parallel --tag ssh {} grep -l "KEYWORD" /etc/hosts :::: Linux.prod.dat
parallel runs the command multiple times, substituting {} with successive lines from the Linux.prod.dat file.
The --tag switch prepends the value from Linux.prod.dat to each line of output. So, the output of the command will look like:
server1 /etc/hosts
server5 /etc/hosts
server7 /etc/hosts
Where server1, server5, etc. will be names of the servers where /etc/hosts contains KEYWORD
Related
I want to create a script that logs information from different IPs and writes the logs to different files at the same time. It should run forever (like while true), but when I start the script it only logs the first IP address to a text file. Here is what I have already tried:
#!/bin/bash
IP=`cat IP.txt`
for i in $IP
do
/usr/bin/logclient -l all -f /root/$i.log $i 19999
done
IP.txt file contains:
x.x.x.x
x.x.x.x
x.x.x.x
x.x.x.x
It looks like your script should work as-is, and if logclient works like I think it does, it will just create a separate log for each IP address. Running ls /root/*.log should reveal all the logs generated.
Parallelizing execution isn't something bash is particularly good at. It has job control for backgrounding tasks, but keeping track of those processes and not overloading your CPU/RAM can be tough.
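If you do want to try it in plain bash, backgrounding each logclient with & is the usual pattern. Here is a minimal sketch, assuming each logclient invocation blocks while it runs; the cap of 4 concurrent jobs is an arbitrary choice, and wait -n needs bash 4.3 or newer:
#!/bin/bash
while read -r ip; do
    # & backgrounds each logclient so the loop doesn't wait for it
    /usr/bin/logclient -l all -f "/root/${ip}.log" "$ip" 19999 &
    # crude throttle: once 4 jobs are running, wait for one to finish
    while (( $(jobs -rp | wc -l) >= 4 )); do wait -n; done
done < IP.txt
wait  # block until every remaining background job exits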
GNU Parallel
If your system has it installed, I'd strongly suggest using GNU parallel. It kicks off one process per CPU core, which makes parallelizing jobs much easier, and parallel only exits when all of its children have exited.
parallel /usr/bin/logclient -l all -f /root/{}.log {} 19999 :::: IP.txt
# all jobs finished, post-process the log (if wanted)
cat /root/*.log >> /root/all-ips.log
Rather use while than for. Something like this:
while read -r LINE; do /usr/bin/logclient -l all -f "/root/$LINE.log" "$LINE" 19999; done < IP.txt
I'd like to create a bash script that automatically connects to a bunch of servers, executes some commands there, and saves the output of those commands in one logfile on the server I use to connect to all the others.
So far I have only managed to create a logfile on each of the servers I connect to, or to display the output of each command on the console of the server I connect from.
My script currently looks like this (I know about for loops, but I don't want to use them in this case because I need to execute different commands on each server):
#!/bin/bash
ssh server1 <<EOF
hostname
printf '\n'
mount
EOF
printf '\n'
printf '\n'
printf '\n'
ssh server2 <<EOF
hostname
printf '\n'
mount
EOF
...
My idea was to use the &>> operator, because I need to know whether all commands were executed successfully or not. In the end I'd like to have only one logfile, which should look somewhat like this:
server1
output of mount
server2
output of mount
...
So, how can I manage to create only one large logfile that contains the results of all executed commands? Also, will this script still work correctly if I use the ssh -T option to get rid of the message "Pseudo-terminal will not be allocated because stdin is not a terminal."? And do I have to escape special characters like / _ - when using mount in my script to mount something?
Thanks in advance!
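For what it's worth, the script as written can produce a single logfile if each ssh block's output is redirected into the same file. A minimal sketch (the log path is a hypothetical choice; -T also suppresses the pseudo-terminal warning, and &>> appends both stdout and stderr; characters like / _ - are not special to the shell, so no escaping is needed):
#!/bin/bash
# /root/all-servers.log is a hypothetical path for the combined log
LOG=/root/all-servers.log

ssh -T server1 &>> "$LOG" <<EOF
hostname
mount
EOF

ssh -T server2 &>> "$LOG" <<EOF
hostname
mount
EOF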
I suggest using open-source utilities like Logstash or Fluentd.
I would use fabric, which is a tool to interact with several servers using ssh. It provides operations for executing remote shell commands.
For your example, the fabfile:
from fabric.api import run, sudo

def my_task():  # hyphens aren't valid in Python names, so use an underscore
    run('hostname')
    run('mount')
And you can execute it:
fab -H server1,server2 my_task
Output goes to the standard output of the machine where you run fab, so you can easily redirect it to a file:
fab -H server1,server2 my_task > my_task.log
I am working within a company and require myself to be added onto different branch servers. The current way of doing this is:
sudo /usr/local/bin/sd-adduser test "Test User"
Currently this has to be done by logging into each server individually, and there are about 20 of them. I vaguely know of expect, which might allow adding a user to multiple servers? Could anyone point me in the right direction, or provide a script to do this?
Any help is appreciated.
Sounds like multi-ssh, pssh, or pdsh could help you.
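With pssh, for example, the whole job can be a one-liner. A sketch, assuming a hosts-file with one server per line and passwordless sudo on the targets (if sudo insists on a terminal, pass -x "-t -t" to force one):
pssh -h hosts-file -i 'sudo /usr/local/bin/sd-adduser test "Test User"'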
In the long run you probably want a central user management system like LDAP.
Routine administration tasks such as this can be done using a script that reads a list of server names and runs a command. Something like this "each-host" script:
#!/bin/sh
for server in $(cat mylist)
do
ssh -t "$server" "$@"
done
where mylist is a file containing the list of servers.
Thus
each-host sudo /usr/local/bin/sd-adduser test "Test User"
would run the OP's command on each host. Once you get that working, you could tidy up a little, making it less verbose (not printing /etc/motd):
#!/bin/sh
for server in $(cat mylist)
do
echo "** $server"
ssh -q -t "$server" "$@"
done
So I'm SSHing into a router that has several VMs. They are set up using LDAP so that each VM has the same files, settings, etc. However, they have different numbers of cores allocated and different libraries and packages installed. Instead of logging into each VM individually and running the command, I want to automate it by putting the script in .bashrc.
So what I have so far:
export LD_LIBRARY_PATH=/lhome/username
# .so files are in ~/ to avoid permission denied problems
output=$(cat /proc/cpuinfo | grep "^cpu cores" | uniq | tail -c 2)
current=server_name
if [[ $(hostname -s) != "$current" ]]; then
ssh $current
fi
/path/to/program --hostname "$(hostname -s)" --threads $((output*2))
Each VM, upon login, will execute this script, so I have to check whether the current VM is already the target host to avoid an SSH loop. The idea is to run the program, then exit back out to the origin to resume the script. The problem, of course, is that the process will die upon logging out.
It's been suggested to me to use tmux on an array of the hostnames, but I have no idea how to approach this.
You could install clusterSSH, set up a list of hostnames, and execute things from the terminal windows it opens. You may use screen/tmux/nohup to allow started processes to keep running even after logout.
Yet, if you still want to play around with scripting, you may install tmux, and use:
while read host; do
scp "script_to_run_remotely" ${host}:~/
ssh ${host} tmux new-session -d '~/script_to_run_remotely'\; detach
done < hostlist
Note: hostlist should be a list of hostnames, one per line.
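If tmux isn't available on the VMs, nohup gives a similar effect. A minimal sketch, with the same hostlist assumption:
while read -r host; do
    scp script_to_run_remotely "${host}:~/"
    # -n closes ssh's stdin; nohup plus & lets the script outlive the session,
    # and redirecting output stops ssh from hanging on the open socket
    ssh -n "${host}" 'nohup ~/script_to_run_remotely >/dev/null 2>&1 &'
done < hostlist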
I have 20 machines, each running a process. The machines are named:
["machine1", "machine2", ...., "machine20"]
To inspect how the process is doing on machine1, I issue the following command from a remote machine:
ssh machine1 cat log.txt
For machine2, I issue the following command:
ssh machine2 cat log.txt
Similarly, for machine20, I issue the following command:
ssh machine20 cat log.txt
Is there a bash command that will allow me to view the output from all machines using one command?
If the machines are nicely numbered like in your example:
for i in {1..20} ; do ssh machine$i cat log.txt; done
If you have the list of machines in a file, you can use:
cat machinesList.txt | xargs -I{} ssh -n {} cat log.txt
You could store all your machine names in an array or text file, and loop through it.
declare -a machineList=('host1' 'host2' 'otherHost') # and more...
for machine in "${machineList[@]}"
do
ssh $machine cat log.txt
done
I assume your machines aren't literally named 'machine1', 'machine2', etc.
Some links:
bash Array Tutorial
GNU Bash Array Documentation
Use a loop?
for i in {1..20}
do
ssh machine$i cat log.txt
done
But note that cat runs within a remote shell session, not the current one, so paths and quoting are interpreted on the remote machine; it might not work quite as you expect. Try it and see.
Put your hosts in a file and use a while loop as shown below. Note the use of the -n flag on ssh: it stops ssh from reading stdin, which would otherwise swallow the rest of the hosts-file:
while read host; do ssh -n $host cat log.txt; done < hosts-file
Alternatively you can use PSSH:
pssh -h hosts-file -i "cat log.txt"
I would recommend a program called Shmux. Despite the name, it works really well. I've used it with more than 100 machines with good results. It also gracefully handles machine failures for you, which is a disadvantage of a bash for loop approach.
I think the coolest thing about this program is that it can run your command in multiple threads, so all 20 machines can be handled in parallel.
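Invocation is roughly like this (a sketch; -c gives the command to run, and machinesList.txt is assumed to hold the hostnames — check the man page for the exact flags):
shmux -c "cat log.txt" $(cat machinesList.txt)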
Aside from the suggestions for using a loop, you might want to take a look at tools, like pssh or dsh, designed for running commands on multiple clients.