Shell: Get The Data From Remote Host And Execute Some Other Commands - linux

I need to create a shell script to do this:
1. ssh to another remote host
2. use sqlplus on that host and the spool command to get the data from the Oracle DB into a file
3. transfer the file from that host to my host
4. execute another shell script to process the data file
I have finished the shell script for step 4. Right now I have to do these 4 steps one by one. I want to create a single script that does them all. Is that possible? How do I transfer the data from that host to my host?
I think maybe the intermediate data file is not necessary.
Note: I have to ssh to another host to use sqlplus. It is the only host that has permission to access the database.

# steps 1 and 2
ssh remote_user@remote_host 'sqlplus db_user/db_pass@db @sql_script_that_spools'
# step 3
scp remote_user@remote_host:/path/to/spool_file local_file
# step 4
process local_file
Or
# steps 1, 2 and 3
ssh remote_user@remote_host 'sqlplus db_user/db_pass@db @sql_script_no_spool' > local_file
# step 4
process local_file
Or, all in one:
ssh remote_user@remote_host 'sqlplus db_user/db_pass@db @sql_script_no_spool' |
process_stdin
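Putting those pieces together, a wrapper script for all four steps could look like this (a sketch only: the host, credentials, paths, and the process_local_file.sh name are placeholders to substitute with your own):
#!/bin/bash
# Sketch: run the remote sqlplus spool, fetch the file, process it locally.
set -e    # abort the chain if any step fails

REMOTE=remote_user@remote_host
SPOOL=/path/to/spool_file

# Steps 1 and 2: run sqlplus on the remote host; the SQL script spools to $SPOOL
ssh "$REMOTE" 'sqlplus db_user/db_pass@db @sql_script_that_spools'

# Step 3: copy the spool file back to this host
scp "$REMOTE:$SPOOL" local_file

# Step 4: process the data file locally
./process_local_file.sh local_file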

Well Glenn pretty much summed it all up.
In order to make your life easier, you may also want to consider setting up passwordless ssh. There is a slightly higher security risk associated with this, but in many cases the risk is negligible.
Here is a link to a good tutorial. It is a Debian-based tutorial, but the commands given should work the same on most major Linux distros.
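For reference, the usual key setup uses the standard OpenSSH tools (the remote name below is a placeholder):
# Generate a key pair locally; an empty passphrase gives fully
# passwordless logins, which is exactly the trade-off mentioned above.
ssh-keygen -t ed25519
# Append the public key to the remote host's authorized_keys
ssh-copy-id remote_user@remote_host
# This login should no longer prompt for a password
ssh remote_user@remote_host true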

Related

How to use the "watch" command with SSH

I have a script that monitors a specific server, giving me the disk usage, CPU usage, etc. I am using 2 Ubuntu VMs: I run the script on the server using SSH (ssh user@ip < script.sh from the first VM), and I want to make it show values in real time, so I tried 2 approaches I found here:
1. while loop with clear
The first approach is using a while loop with "clear" to make the script run multiple times, giving new values every time and clearing the previous output like so:
while true
do
clear;
# bunch of code
done
The problem here is that it doesn't clear the terminal; it just keeps printing the new results one after another.
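One likely cause (an assumption, since it depends on how the script reaches the remote shell): when a script is fed to ssh on stdin without a pseudo-terminal, TERM is often unset, so clear has no terminal description to work with. Emitting the ANSI escape sequence directly sidesteps that:
while true
do
# Clear the screen and move the cursor home without consulting terminfo,
# so this works even when TERM is unset on the remote side.
printf '\033[2J\033[H'
# bunch of code
sleep 1
done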
2. watch
The second approach uses watch:
watch -n 1 script.sh
This works fine on the local machine (to monitor the machine where the script resides), but I can't find a way to make it run via SSH. Something like
ssh user@ip 'watch -n 1 script.sh'
works in principle, but requires that the script be present on the server, which I want to avoid. Is there any way to run watch for the remote execution (via SSH) of a script that is present on the local machine?
For your second approach (using watch), what you can do instead is run watch locally (from within the first VM) with an SSH command and a piped-in script, like this:
watch -n 1 'ssh user@ip < script.sh'
The drawback of this is that it will reconnect in each watch iteration (i.e., once a second), which some server configurations might not allow. See here for how to let SSH re-use the same connection for serial ssh runs.
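One way to let SSH reuse the connection (a sketch using standard ssh_config options; the alias and address are placeholders) is connection multiplexing in ~/.ssh/config:
# ~/.ssh/config -- share one TCP connection across repeated ssh runs
Host monitored-vm
    HostName 192.0.2.10        # placeholder address
    User user
    ControlMaster auto         # the first connection becomes the master
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m         # keep the master alive 10 minutes after last use
With that in place, each watch iteration rides the existing connection instead of performing a full handshake once a second.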
But if what you want to do is monitor servers, what I really recommend is using a monitoring system like Telegraf.

Run command on multiple Linux servers

One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the command in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can run ssh with the -t flag to open an ssh session, run a command and then close the session. But to get this fully automated you should automate the login process to every server so that you don't need to type the password for each one.
So to run df -h on a remote server and then close the session, you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this could be to write a bash script that runs this command for every server and processes the output to check the health of each (see the sketch after the links below).
For further information about the -t flag, or how to automate the ssh login process, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
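A minimal sketch of such a loop (the hostnames are placeholders, and key-based login is assumed so nothing prompts for a password):
#!/bin/bash
# Run a health-check command on each server and label the output.
for server in server1 server2 server3; do
echo "=== $server ==="
ssh -o BatchMode=yes "$server" df -h
done
BatchMode=yes makes ssh fail immediately instead of hanging on a password prompt if key login is not set up.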
You can use ssh tunnels or just plain ssh for this purpose. With an ssh tunnel you can redirect the outputs to your machine, or as an alternative, you can run ssh with the remote commands from your machine and collect the output on your machine too.
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts and populate it like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your ssh key with all of the machines.
Install sxx from a package:
https://github.com/ericcurtin/sxx/releases
Then run a command like so:
sxx username@grp_ips "whatever bash command"

How do I run a bash script (that resides on a remote server) in Windows Task Scheduler?

SOLVED
Scenario: I am a beginner with bash scripts, Windows Task Scheduler and such. I am able to run a local bash script through my Windows Task Scheduler successfully.
Problem: I need to do this on many computers, so I think storing just 1 copy of the bash script on a remote server may help. What my Task Scheduler needs to do is just run the script and output a log. However, I can't get the correct syntax for the argument.
The below is what I have currently:
Program/Script: C:\cygwin64\bin\bash.exe
Argument (works successfully):
-l -c "ssh -p 222 ME#ME.com "httpdocs/bashscript.sh" >> /cygdrive/c/Users/ME/Desktop/`date +%Y%m%d`.log 2>&1"
Start in: C:\cygwin64\bin
I also had to make sure that the user account under Properties in Task Scheduler was correct, as mine was incorrect before. Key authentication for ME@ME.com is needed too.
For the password issue, you really should use ssh keys. I think your command would simply be ssh -p 222 ME@ME.com:.... I.e., just get rid of the --rsh stuff. – chrisaycock

Efficient way to execute a command when instructed

What is the best and most secure way for a terminal to ping a server for a list of commands to execute every 60 secs? For example, it could download a file (that houses the commands) or query a database and then execute what is on there.
Are there more efficient/secure ways to accomplish the above?
Thanks
If you want to make it into a script:
commands.ssh
echo "This will run on the remote machine."
# Do a backup or something...
Then you can pass this file to the remote machine using:
ssh -i id_rsa user@remote < commands.ssh
I recommend using an ssh key so that you don't have to keep your login information in the commands file.
Note: make sure the permissions for the commands.ssh file are secure!
chmod 600 commands.ssh
SSH connections are already encrypted, so the transport is secure. If the commands are predefined, you can rely on a cron job; then you don't need to log in to a terminal again and again to run them.
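A sketch of the cron approach (the hostname and paths are placeholders): fetch the command file once a minute, which is cron's finest granularity and matches the 60-second requirement, and then execute it.
# crontab entry (edit with: crontab -e)
# server.example.com and both paths are placeholders.
* * * * * scp -q user@server.example.com:/srv/commands.sh /tmp/commands.sh && bash /tmp/commands.sh >> /tmp/commands.log 2>&1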

How to connect to multiple servers to run the same query?

I have 4 servers where we have log files in the same pattern. For every search/query I need to log in to each server one by one and execute the command.
Is it possible to provide some command, so that it will log in to all those servers one by one automatically and fetch the output from each server?
What configuration, settings, etc. do I have to do to make this work?
I am new to Linux Domain.
As suggested in your question comments, there are a number of tools to help you in performing a task on multiple machines. I will add to this list and suggest Ansible. It is designed to perform all of the interactions over ssh, in quite a simple manner, and with very little configuration.
https://github.com/ansible/ansible
If you were to have server-1 and server-2 defined in your ~/.ssh/config file, then the ansible configuration would be as simple as
[myservers]
server-1
server-2
Then to run a command on the group
$ ansible myservers -a uptime
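If the inventory lives in a non-default location, say a hypothetical hosts.ini holding the [myservers] group above, an arbitrary shell command across the group looks like:
# hosts.ini is a placeholder inventory filename
$ ansible -i hosts.ini myservers -m shell -a "df -h"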
If your servers are called eenie, meanie, minie, and moe, you simply do
for server in eenie meanie minie moe; do
ssh "$server" grep 'intrusion attempt' /var/log/firewall.log
done
The grep command won't reveal which server a result is reporting from; maybe replace it with:
ssh "$server" sed -n "/intrusion attempt/s/^/$server: /p" /var/log/firewall.log
Use https://sealion.com. You just have to execute one script and it will install the agent on your servers and start collecting output. It has a convenient web interface to see the output across all your servers.
