One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the command in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can run ssh with the -t flag to open an ssh session, run a command, and then close the session. But to get this fully automated you should also automate the login to every server, so that you don't need to type a password for each one.
So to run df -h on a remote server and then close the session, you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this could be to write a bash script that runs this command for every server and processes the output to check the health of each server.
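For example, a minimal sketch of such a script (the host names are placeholders, and key-based ssh login is assumed to be set up already):

#!/bin/bash
# Hypothetical host names; replace with your own servers
servers="web1 web2 db1"

for host in $servers; do
    echo "=== $host ==="
    ssh "$host" df -h
done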
For further information about the -t flag, or about how to automate the ssh login process, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
You can use ssh tunnels or just plain ssh for this purpose. With an ssh tunnel you can redirect the outputs to your machine, or, as an alternative, you can run ssh with the remote commands from your machine and get the output on your machine too.
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.google.hu/amp/s/www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/amp/
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts and populate it like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your ssh key with all the machines.
Install sxx from package:
https://github.com/ericcurtin/sxx/releases
Then run command like so:
sxx username@grp_ips "whatever bash command"
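If you have not shared your key yet, OpenSSH's standard tools can do it; for example:

# Generate a key pair once, if you don't already have one
ssh-keygen -t rsa

# Copy the public key to every machine in the group
for ip in 1.1.1.1 2.2.2.2 3.3.3.3; do
    ssh-copy-id "username@$ip"
done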
I'm learning gearman and found that there are two ways to start it:
sudo gearmand -d
sudo service gearman-job-server start
What's the difference?
When to use each of them?
Thanks for any feedback!
Well, this is not specific to gearmand; it applies to almost all Linux daemons/services.
The program/service can be invoked in different ways: directly from the terminal, through scripts in /etc, and by other means. I am assuming you know what sudo does.
# gearmand -d
You are invoking the gearmand executable directly. The shell knows where the executable is because PATH is set. You can find its location with "whereis gearmand" or with find.
This is the direct way of calling the application/service.
"daemon" is a background process. The "-d" argument to gearman starts it in daemon mode (in background).
Advantage/s:
If you compile multiple versions of the service on the same machine (in this case gearman), you can invoke them individually without installing/reinstalling.
Sometimes the installation doesn't work, or the service might not ship with startup scripts.
Disadvantage/s:
It may not give uniform output like the standard scripts/commands do.
You may need to know the location of the executable.
# service gearman-job-server start
calls the service script, which usually looks in the directory /etc/init.d. If you wish to find where service searches for services on your Linux distribution, you can inspect the script itself.
Find the location of the service script with "whereis service", then open it with "less path_to_service", or directly with "whereis service | cut -d " " -f2 | xargs less".
The service script more or less standardizes the way services are started and stopped on Linux these days.
$ service service_name start
service_name started
$ service service_name start
service_name already running
$ service service_name stop
service_name stopped
$ service service_name stop
service_name not running
This provides a uniform way of starting or stopping all services.
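Under the hood these are just shell scripts with a case statement. A minimal sketch of the general shape (the paths and messages are illustrative; real scripts add LSB headers and PID-file handling):

#!/bin/sh
# Sketch of an init-style start/stop script
case "$1" in
  start)
    /usr/sbin/gearmand -d && echo "gearmand started"
    ;;
  stop)
    kill $(pidof gearmand) && echo "gearmand stopped"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac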
I have this protocol port open so it can be read remotely by Python and PHP applications, but it crashes daily and the port becomes unavailable; as a result all the Python and PHP client applications fail.
$ cat /var/tmp/server.sh
#!/bin/bash
# Restart the listener whenever nc exits
while true; do
    tail -f /usr/local/freeswitch/log/freeswitch.log | nc -l -p 9999 -q 1
done
Q. Is there any way to make this script run like a service (start/stop), so that if it crashes it automatically gets restarted? Any advice or a link on how to do such a thing? I am using CentOS 6.x.
Put your script in /etc/inittab as follows:
id:2345:respawn:/var/tmp/server.sh
Refer to http://linux.about.com/od/commands/l/blcmdl5_inittab.htm for more information about the /etc/inittab file.
After editing /etc/inittab, run "telinit q" (or reboot) so that init re-reads the file.
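Note that CentOS 6 actually boots with Upstart, which only reads the initdefault line from /etc/inittab, so on 6.x a native Upstart job is the more reliable route. A minimal sketch (the job name is illustrative):

# /etc/init/freeswitch-log.conf
description "Serve the freeswitch log on port 9999"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /var/tmp/server.sh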
I have 4 servers where we have log files in the same pattern. For every search/query I need to log in to all the servers one by one and execute the command.
Is it possible to provide some command so that it will log in to all those servers one by one automatically and fetch the output from each server?
What configuration, settings, etc. do I have to set up to make this work?
I am new to the Linux domain.
As suggested in your question comments, there are a number of tools to help you in performing a task on multiple machines. I will add to this list and suggest Ansible. It is designed to perform all of the interactions over ssh, in quite a simple manner, and with very little configuration.
https://github.com/ansible/ansible
If you were to have server-1 and server-2 defined in your ~/.ssh/config file, then the ansible inventory would be as simple as
[myservers]
server-1
server-2
Then to run a command on the group:
$ ansible myservers -a uptime
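For reference, the matching ~/.ssh/config entries might look like this (the addresses and user are placeholders):

Host server-1
    HostName 192.0.2.10
    User admin

Host server-2
    HostName 192.0.2.11
    User admin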
If your servers are called eenie, meanie, minie, and moe, you simply do
for server in eenie meanie minie moe; do
ssh "$server" grep 'intrusion attempt' /var/log/firewall.log
done
The grep command won't reveal which server it is reporting from; you could replace it with something like:
ssh "$server" sed -n "/intrusion attempt/s/^/$server: /p" /var/log/firewall.log
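An alternative sketch keeps plain grep on the remote side and tags each line locally instead, which avoids the remote quoting:

for server in eenie meanie minie moe; do
    # Prefix each matching line with the server it came from
    ssh "$server" grep 'intrusion attempt' /var/log/firewall.log |
        sed "s/^/$server: /"
done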
Use https://sealion.com. You just have to execute one script and it will install the agent on your servers and start collecting output. It has a convenient web interface to see the output across all your servers.
I have two questions:
There are multiple remote Linux machines, and I need to write a shell script that will execute the same set of commands on each machine (including some sudo operations). How can this be done using shell scripting?
When ssh'ing to the remote machine, how do I handle the RSA fingerprint prompt?
The remote machines are VMs created on the fly, and I just have their IPs. So I can't place a script file beforehand on those machines and execute it from my machine.
There are multiple remote Linux machines, and I need to write a shell script that will execute the same set of commands on each machine (including some sudo operations). How can this be done using shell scripting?
You can do this with ssh, for example:
#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done
When ssh'ing to the remote machine, how do I handle the RSA fingerprint prompt?
You can add the StrictHostKeyChecking=no option to ssh:
ssh -o StrictHostKeyChecking=no -l username hostname "pwd; ls"
This will disable the host key check and automatically add the host key to the list of known hosts. If you do not want to have the host added to the known hosts file, add the option -o UserKnownHostsFile=/dev/null.
Note that this disables certain security checks, for example protection against man-in-the-middle attacks. It should therefore not be used in a security-sensitive environment.
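If your client is OpenSSH 7.6 or newer, StrictHostKeyChecking=accept-new is a middle ground: it accepts keys from unknown hosts but still rejects changed keys:

ssh -o StrictHostKeyChecking=accept-new -l username hostname "pwd; ls"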
Install sshpass using apt-get install sshpass, then edit the script and put your Linux machines' IPs, usernames and passwords in the respective order. After that, run the script. That's it! This script will install VLC on all the systems.
#!/bin/bash
SCRIPT="cd Desktop; pwd; echo -e 'PASSWORD' | sudo -S apt-get install vlc"
HOSTS=("192.168.1.121" "192.168.1.122" "192.168.1.123")
USERNAMES=("username1" "username2" "username3")
PASSWORDS=("password1" "password2" "password3")
for i in ${!HOSTS[*]} ; do
echo ${HOSTS[i]}
SCR=${SCRIPT/PASSWORD/${PASSWORDS[i]}}
sshpass -p ${PASSWORDS[i]} ssh -l ${USERNAMES[i]} ${HOSTS[i]} "${SCR}"
done
This works for me.
Syntax: ssh -i pemfile.pem user_name@ip_address 'command_1; command_2; command_3'
#! /bin/bash
echo "########### connecting to server and run commands in sequence ###########"
ssh -i ~/.ssh/ec2_instance.pem ubuntu@ip_address 'touch a.txt; touch b.txt; sudo systemctl status tomcat.service'
There are a number of ways to handle this.
My favorite way is to install http://pamsshagentauth.sourceforge.net/ on the remote systems, along with your own public key. (Figure out a way to get these installed on the VM; somehow you got an entire Unix system installed, what's a couple more files?)
With your ssh agent forwarded, you can now log in to every system without a password.
And even better, that pam module will authenticate for sudo with your ssh key pair so you can run with root (or any other user's) rights as needed.
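The sudo side of this is PAM configuration. A sketch of the usual setup, assuming the module's documented defaults (paths vary by distribution):

# /etc/pam.d/sudo -- try the forwarded ssh agent before asking for a password
auth sufficient pam_ssh_agent_auth.so file=/etc/security/authorized_keys

# /etc/sudoers -- keep the agent socket visible to sudo
Defaults env_keep += "SSH_AUTH_SOCK"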
You don't need to worry about the host key interaction. If the input is not a terminal then ssh will just limit your ability to forward agents and authenticate with passwords.
You should also look into packages like Capistrano. Definitely look around that site; it has an introduction to remote scripting.
Individual script lines might look something like this:
ssh remote-system-name command arguments ... # so, for example,
ssh target.mycorp.net sudo puppet apply
The accepted answer sshes to the machines sequentially. If you want to ssh to multiple machines and run long-running commands (like scp transfers) on them concurrently, run each ssh command as a background process.
#!/bin/bash
username="user"
servers=("srv-001" "srv-002" "srv-003");
script="pwd;"
for s in "${servers[@]}"; do
echo "sshing ${username}#${s} to run ${script}"
(ssh ${username}#${s} ${script})& # Run in background
done
wait # Wait for the background ssh jobs; if removed, you can run other work here while they finish
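If you also want each host's output kept separately, a small variation (the log-file names are just illustrative):

for s in "${servers[@]}"; do
    # Capture each host's stdout and stderr in its own file
    ssh "${username}@${s}" "${script}" > "out-${s}.log" 2>&1 &
done
wait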
If you are able to write Perl code, then you should consider using Net::OpenSSH::Parallel.
You would be able to describe the actions that have to be run on every host in a declarative manner, and the module will take care of all the scary details. Running commands through sudo is also supported.
For this kind of task, I repeatedly use Ansible, which makes it possible to run the same bash scripts coherently across several containers or VMs. Ansible (more precisely, Red Hat) now also has a web interface, AWX, which is the open-source edition of their commercial Tower product.
Ansible: https://www.ansible.com/
AWX: https://github.com/ansible/awx
Ansible Tower: a commercial product; you will probably first explore the free open-source AWX rather than the 15-day free trial of Tower.
There are multiple ways to execute commands or scripts on multiple remote Linux machines.
One simple and easy way is via pssh (the parallel ssh program).
pssh is a program for executing ssh in parallel on a number of hosts. It provides features such as sending input to all of the processes, passing a password to ssh, saving output to files, and timing out.
Example & Usage:
Connect to host1 and host2, and print "hello, world" from each:
pssh -i -H "host1 host2" echo "hello, world"
Run commands via a script on multiple servers:
pssh -h hosts.txt -P -I < ./commands.sh
Run a command without checking or saving host keys:
pssh -h hostname_ip.txt -x '-q -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes' -i 'uptime; hostname -f'
If the file hosts.txt has a large number of entries, say 100, then the parallelism option may also be set to 100 to ensure that the commands are run concurrently:
pssh -i -h hosts.txt -p 100 -t 0 sleep 10000
Options:
-I: Read input and send it to each ssh process.
-P: Tell pssh to display output as it arrives.
-h: Read the hosts file.
-H: [user@]host[:port] for a single host.
-i: Display standard output and standard error as each host completes.
-x args: Pass extra SSH command-line arguments.
-o option: An ssh option in the format used in the ssh configuration files (/etc/ssh/ssh_config, ~/.ssh/config); usually passed through -x as above.
-p parallelism: Use the given number as the maximum number of concurrent connections.
-q: Quiet mode; causes most warning and diagnostic messages to be suppressed.
-t: Make connections time out after the given number of seconds; 0 means pssh will not time out any connections.
When ssh'ing to the remote machine, how do I handle the RSA fingerprint prompt?
Disable StrictHostKeyChecking to handle the RSA authentication prompt:
-o StrictHostKeyChecking=no
Source: man pssh
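Putting these together, a sketch that runs uptime on every host listed in hosts.txt without host-key prompts:

pssh -i -h hosts.txt -x '-o StrictHostKeyChecking=no' uptime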
This worked for me. I made a function. Put this in your shell script:
sshcmd(){
    ssh "$1@$2" "$3"
}
sshcmd USER HOST COMMAND
If you have multiple machines on which you want to run the same command, repeat that call separated by a semicolon. For example, with two machines you would do this:
sshcmd USER HOST COMMAND ; sshcmd USER HOST COMMAND
Replace USER with the user of the computer, HOST with the name of the computer, and COMMAND with the command you want to run on the computer.
Hope this helps!
You can follow this approach:
Connect to the remote machine using an Expect script. If your machine doesn't have expect installed, you can download it. Writing an Expect script is fairly easy (google for help on this).
Put all the actions that need to be performed on the remote server in a shell script.
Invoke the remote shell script from the Expect script once login is successful (a minimal sketch follows below).
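A minimal sketch of that approach, driving expect from a bash script (the host, user, password, and script path are placeholders):

#!/bin/bash
# Answer the host-key and password prompts, then run the remote script
expect <<'EOF'
spawn ssh user@192.0.2.10 "bash /tmp/remote_script.sh"
expect {
    "yes/no"    { send "yes\r"; exp_continue }
    "password:" { send "secret\r" }
}
expect eof
EOF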