ssh mysqldump to remote NAS storage with cron, storing command(s) in a .sh file - linux

I'm trying to set up mysql database backups with cron in order to back up the mysql database to my local NAS storage. I would like to store the command(s) in a .sh file on the server and then use cron to execute it.
So far I've managed to put together a command that saves the database from the remote server to my NAS (QNAP), which is:
mysqldump \
  --add-drop-table \
  --comments [database_name] \
  -u [database_username] \
  -p[database_password] |
gzip -c |
ssh [nas_user]@[nas_ip_address] \
  "cat > /share/mysqlBackup/backup-`date +%Y-%m-%d_%H-%M-%S`.sql.gz"
The above works fine, but the problems I have are:
1. I'm not sure how to create the .sh file on the remote server and put the command in it.
2. This command asks for the password each time you execute it - is there a way to put it in the .sh file so that it can be executed in the background without prompting, or to define the password in the command?
Examples of how to solve the above would be very welcome.
I believe that expect could be used, but again - I'm not familiar with it and its documentation is a bit confusing for me.

I guess the password prompt comes from the ssh connection, so you can make your ssh connection passwordless (key-based).
Passwordless ssh connections are explained in this answer:
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
Once this step is done on your remote server, the rest is pretty straightforward: write the command you gave above into a .sh file and add it to cron so the backup runs periodically.
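A minimal sketch of how this could be wired together, assuming key-based ssh to the NAS is possible (e.g. with ssh-keygen and ssh-copy-id, if the NAS accepts keys) and using the placeholder names from the question; the script path and schedule are only examples.

One-time setup, run on the database server:
ssh-keygen -t rsa
ssh-copy-id [nas_user]@[nas_ip_address]

Contents of a script file, e.g. /root/backup_to_nas.sh:
#!/bin/bash
# Dump the database, compress it, and stream it to the NAS over ssh.
# Values in [brackets] are placeholders from the question.
mysqldump --add-drop-table --comments \
  -u [database_username] -p[database_password] [database_name] |
gzip -c |
ssh [nas_user]@[nas_ip_address] \
  "cat > /share/mysqlBackup/backup-$(date +%Y-%m-%d_%H-%M-%S).sql.gz"

Make the file executable and private (chmod 700 /root/backup_to_nas.sh, since it contains the MySQL password), then add a crontab entry, for example to run it every night at 02:00:
0 2 * * * /root/backup_to_nas.sh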

Related

Run a command on multiple Linux servers

One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the command in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can run ssh with the -t flag to open an ssh session, run a command and then close the session. But to get this fully automated you should automate the login process for every server so that you don't need to type the password for each one.
So to run df -h on a remote server and then close the session you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this could be to write a bash script that runs this command for every server and processes the output to check the health of each server.
For further information about the -t flag and how to automate the ssh login process, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
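As a rough sketch of that idea, assuming key-based login is already in place and using a made-up host list (replace it with your own servers):

#!/bin/bash
# Illustrative host list; replace with your real servers
servers="web01.example.com web02.example.com db01.example.com"

for host in $servers; do
    echo "== $host =="
    ssh -o ConnectTimeout=5 "root@$host" "df -h"
done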
You can use ssh tunnels or just plain ssh for this purpose. With an ssh tunnel you can redirect the output to your machine, or as an alternative, you can run ssh with the remote commands from your machine and get the output on your machine too.
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.google.hu/amp/s/www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/amp/
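As a quick, generic illustration of the tunnel idea (host and ports are made up): a local port forward such as
ssh -L 8080:localhost:80 user@remote.example.com
makes the remote machine's web server on port 80 reachable at localhost:8080 on your machine, while ssh user@remote.example.com "df -h" simply runs the command remotely and prints the output locally.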
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts and populate it like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your ssh key with all the machines.
Install sxx from package:
https://github.com/ericcurtin/sxx/releases
Then run a command like so:
sxx username@grp_ips "whatever bash command"

Running the history command remotely in Linux

My requirement is to save the history of commands into a file called history_yymmdd.txt by running the following command on a remote server.
history > history_20170523.txt
I tried with the following command, but it was creating a blank file on the remote server (10.12.13.14).
ssh 10.12.13.14 "history > history_20170523.txt"
When I log in to the remote server and run the history command, the file is created successfully. But I need to run the command on 20 servers, so creating a script to run it remotely on each server is my objective here.
Thanks in advance.
ssh user@machine_name "cat ~/.bash_history > history_20170523.txt"
The 'history' command dumps the contents of .bash_history, so this may be useful to you. (Your original attempt produces a blank file because a non-interactive ssh session does not load shell history, so the history builtin has nothing to print.)
A more elegant solution might be:
scp user@machine_name:~/.bash_history history_20170523.txt
You are doing it the wrong way; also, there is no user specified for the remote machine. The correct way to do it is:
ssh -q -tt user@10.12.13.14 'history > history_20170523.txt'
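To cover all 20 servers, a simple loop along the lines of the first answer could look like this (the host list and file naming are only illustrative, and key-based login is assumed):

#!/bin/bash
# Illustrative list of remote hosts; replace with your 20 servers
hosts="10.12.13.14 10.12.13.15 10.12.13.16"
datestamp=$(date +%Y%m%d)

for h in $hosts; do
    # Copy each server's bash history into a per-host file on this machine
    scp "user@$h:.bash_history" "history_${datestamp}_${h}.txt"
done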

Using local system as ssh client and server

I am using local system to learn ssh and what I am trying to do is execute a command on the remote server.
I have the ssh server running in terminal1 and the client in terminal2.
I used the following command on terminal2:
ssh user1@127.0.0.1 echo Display this.
but it echoes on terminal2. How would I know if the command actually worked if it's not displaying in terminal1?
Thank you.
It worked correctly. It ssh'd into the server, executed the command, and returned the stdout of that command back to you.
SSH gains access to the server, but not necessarily to any TTYs active on it. You would have to jump through some hoops to send text to a specific TTY, such as your terminal1.
A better test would be:
ssh user1@127.0.0.1 'touch ~/testfile'
Then you can check on your server (which is localhost) to see if testfile was created in your user1 home folder. If it did, then the connection and the command succeeded.
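Related to that, where output ends up depends on quoting (a generic illustration, not part of the original answer):
ssh user1@127.0.0.1 'echo Display this > /tmp/remote_out.txt'   # redirection runs on the server
ssh user1@127.0.0.1 echo Display this > /tmp/local_out.txt      # redirection runs on your client
In the first form the quotes send the whole command line, including the redirection, to the remote shell, so the file is created on the server; in the second, your local shell performs the redirection and the file appears on the client.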

efficient way to execute command when instructed

What is the best and most secure way for a terminal to poll a server for a list of commands to execute every 60 seconds? For example, it could download a file (that houses the commands) or query a database and then execute what is on there.
Are there more efficient/secure ways to accomplish the above?
Thanks
If you want to make it into a script:
commands.ssh
echo "This will run on the remote machine."
# Do a backup or something...
Then you can pass this file to the remote machine using:
ssh user@remote -i id_rsa < commands.ssh
I recommend using an SSH key so that you don't have to keep your login information in the commands file.
Note: make sure the permissions for the commands.ssh file are secure!
chmod 600 commands.ssh
SSH connections are already encrypted, so they are a secure channel for this. If the commands are predefined you can rely on a cron job; then you don't need to log in to a terminal again and again to run them.
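For example, the commands.ssh approach can be combined with cron; every minute is cron's finest granularity, which matches the 60-second requirement (the host, key path and file locations here are placeholders):
* * * * * ssh -i /home/user/.ssh/id_rsa user@remote.example.com < /home/user/commands.ssh >> /home/user/commands.log 2>&1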

ssh without key and collect the output using bash script

I want to create a bash script that will log in to all the Linux servers in my network using ssh and collect the output of the 'uptime' command into a local file. There is no key pair installed between the local server and the remote servers, so I need to give the password (the username and password are the same for all the remote servers) in the script itself. I know this is not a secure way to do it, but it is just for learning purposes. I see the 'expect' command can be used for ssh login with a password, but I'm confused about how to use it together with the 'uptime' command that reports the server status. So my requirements are:
1. I have a local server, test1, which contains a text file 'server_status.txt'.
2. I need a script on test1 that will try to log in to all the remote servers (say 192.168.0.1 to 192.168.0.50) using the same username and password. It will execute the command 'uptime' once logged in to each remote server and store the output in the local file 'server_status.txt'.
REVOKED, because the connection needs to be established without a key: paste your public key into the server's /path2userthatshouldlogon/.ssh/authorized_keys and run your commands remotely using ssh user@host commandtoexecute.
UPDATE: have a look at sshpass if you really need to use passwords, which is NOT RECOMMENDED.
Note: Doing this is poor practice. If you are testing around with this then you are learning a bad habit. Don't do this in production on servers you care about.
You'll want to capture the expect call with command substitution and make sure the $USER, $SERVER and $PASSWORD variables are set (or just replace them inline):
uptime=$(expect -c "spawn ssh $USER@$SERVER uptime; expect \"assword:\"; send \"$PASSWORD\r\"; expect eof")
echo "$uptime"
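A rough sketch of the full loop using sshpass (mentioned above) instead of expect; the username, password and host-key handling here are placeholders, and keeping the password in the script is insecure by design:

#!/bin/bash
# Collect uptime from 192.168.0.1 - 192.168.0.50 into server_status.txt
user="admin"          # placeholder username
password="secret"     # placeholder password; storing it in the script is insecure
> server_status.txt   # truncate the local output file

for i in $(seq 1 50); do
    ip="192.168.0.$i"
    echo -n "$ip: " >> server_status.txt
    sshpass -p "$password" ssh -o StrictHostKeyChecking=no "$user@$ip" uptime >> server_status.txt
done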
