Display stats from remote Linux server

My intention is to display stats like Load avg and RAM usage from a remote server onto an LCD panel 24/7. I would like the information to be updated once every 1-3 seconds.
Other threads have suggested using SSH commands to retrieve the information.
ssh root@192.168.1.252 uptime && cat /proc/meminfo
Is using SSH suitable for my purposes? My concerns are:
my log files may be bloated because of all the login attempts
the overhead of setting up and tearing down an SSH connection every few seconds
Is there any such package out there or do I have to code it myself? I would prefer one that keeps the connection open to reduce overhead. I do not require encryption as both servers are on LAN.
Thanks in advance.

Several things to note:
Don't use root if you don't need to. For uptime and cat /proc/meminfo you certainly don't need root. Use another user.
Note the difference between these two:
ssh user@hostname uptime && cat /proc/meminfo
ssh user@hostname 'uptime && cat /proc/meminfo'
The first one will execute cat /proc/meminfo on your local machine, the second will execute it on the remote. I think you want to use the second version. (You want the stats of the remote machine, not your local machine, don't you?)
You can use connection multiplexing to kill two birds with one stone: reduce the overhead of establishing new connections and avoid polluting the server log. To do this, add a configuration like this to your ~/.ssh/config file:
Host somename
User the_username
Hostname the_hostname
ControlMaster auto
ControlPath ~/.ssh/master-somename
You can choose any somename, it's like an alias. With this setting, you can connect to the server simply as:
ssh somename
While this remote session is still alive (until you logout), you can open new connections from another terminal, and they will reuse the existing connection, bypassing authentication and effectively eliminating the overhead of new connections.
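If you don't want to keep an interactive session open yourself, newer OpenSSH releases (5.6 and later, an assumption about your version) also support ControlPersist, which keeps the master connection alive in the background after the last session exits. Add one more line to the Host block above:
ControlPersist 10m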
This is actually a common trick when working with slow remote servers where establishing new connections is a noticeable overhead. In fact I use this setting, applying it to all remote servers I work with:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
I usually recommend this trick for everyone.
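For the original use case, the polling loop then becomes trivial. A minimal sketch, assuming the somename alias above is configured and a master connection is up, so only the first connection pays the authentication cost:
while true; do
    ssh somename 'uptime && grep -E "MemTotal|MemFree|MemAvailable" /proc/meminfo'
    sleep 2
done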

Related

SSH interception - Linux

Really hoping someone here can point me in the right direction,
Expected result: SSH successfully into a remote device.
Challenge/Back story:
We have devices out in remote places around the country,
These devices do not have a fixed public IP address
(Using GSM as its internet breakout)
These devices are able to SSH and break out.
My thought, with regards to maintaining these devices, is (if possible) to use a server in the cloud as a middle man: have these devices create some sort of reverse tunnel to our middle-man server, then have us as admins intercept it, or something to that effect.
Again, to summarize: the device cannot be ssh'd into directly, but can break out.
The aim is to be able to reach their terminal from the office.
I have been looking at mitmssh but am not coming right on that front.
Server A (no fixed address, cannot SSH into it directly but has breakout)
Server B (standard server which can be used as a middle man)
Server C (Us admins)
Tried something along the lines of "ssh user@serverA -R serverB:12345:ServerA:22"
Which creates the tunnel, but struggling with grabbing hold of that SSH connection.
I think I regularly use something very similar. My target machine connects to the machine with a stable address with:
ssh midpoint -R 2022:localhost:22
My ~/.ssh/config file knows the real HostName. The config file on my work machine defines a ProxyCommand option to use this tunnelled TCP connection, like:
Host target
ProxyCommand ssh -q midpoint nc localhost 2022
The reason for using netcat was to get ssh-agent behaving.
I've just been searching around and it seems OpenSSH now has specific handling for this (the -W command line option, and ProxyJump in the config file). E.g. https://stackoverflow.com/a/29176698/1358308
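With that newer handling, the work-machine config can skip netcat entirely. A sketch, assuming the same midpoint alias and the reverse tunnel listening on port 2022 as above:
Host target
HostName localhost
Port 2022
ProxyJump midpoint
Then ssh target first connects to midpoint and from there to localhost:2022, which is the tunnelled sshd on the remote device.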

Is mysqldump secure

I would like to start this discussion about mysqldump security.
By security I'm not speaking about cron tasks that expose passwords, or password security in any way; instead I'm talking about the security of the command itself.
In my particular case I have set up a cron job on my home server that runs mysqldump to back up my website's database on the VPS that I have with 1&1.
So basically the scenario is that my home PC is remotely backing up the MySQL database over port 3306.
This works correctly, but I've started having nightmares thinking that maybe someone could listen on port 3306 and grab all my data while I'm backing up (with mysqldump). From what I have understood, MySQL on port 3306 is not under SSL, so anybody could potentially get the backup copy of the database?
I mean it would be possible this scenario:
My Home PC starts the mysqldump task
My VPS at 1&1 prepares the SQL dump remotely
My Home PC receives the dump from the remote server
Between point 2 and point 3, is it possible that someone gets a copy of my file?
Thanks in advance for the answers
Marcos
You should not expose port 3306 on your VPS host to the public internet. MySQL's unencrypted port is not secure.
If you're running mysqldump on your VPS host, and only transferring the resulting dump file to your PC, then you can do this securely.
If you can ssh to your VPS, you should be able to use scp too. This gives you the ability to transfer files securely.
Here's a FAQ article about using scp with 1&1. I found this by googling for "1&1 scp":
http://faq.1and1.co.uk/server/root_server/linux_recovery/9.html
If you need to run mysqldump on your Home PC and connect remotely to MySQL on the VPS host, you have options:
Run mysqldump on the PC with SSL connection options.
Open a port-forwarding ssh tunnel, then run mysqldump on the PC connecting to the forwarded port (see the sketch after this list).
Run ssh to invoke mysqldump on the VPS, then capture output. See example in the accepted answer to this question: https://serverfault.com/questions/36467/temporary-ssh-tunnel-for-backup-purposes
Create a VPN and do anything you want because it's all encrypted.
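For example, the second option might look like this. A sketch; the user, host, database name, and local port are placeholders:
# Forward local port 3307 to MySQL on the VPS, over an encrypted ssh session (-f: background, -N: no remote command).
ssh -f -N -L 3307:127.0.0.1:3306 marcos@vps.example.com
# Dump through the tunnel; --protocol=TCP makes sure the client uses the forwarded TCP port.
mysqldump --host=127.0.0.1 --port=3307 --protocol=TCP -u dbuser -p mydb > dump.sql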
Re your comments of 10/11:
I need to execute the command from home PC to backup the VPS remotely.
I want to ... receive the backup file directly instead, so nothing should be saved on the VPS.
Okay, here's what you can do, without exposing port 3306:
$ ssh marcos@192.168.1.3 'mysqldump ...options.. | gzip -c' > ~/dump.sql.gz
Notice the position of quotes in that command. You're executing on the VPS the command: mysqldump ...options.. | gzip -c. The stdout of that command is a gzipped stream of the dump. That stream is returned via ssh, and then > saves the output locally in the shell on your PC.
Re your comment of 10/13:
Now I'm storing on the server a plain-text file that contains the credentials to access the MySQL server. I mean, if someone breaks into the server they will be able not just to damage the server content but also to damage and steal the MySQL database and information. Am I right?
If you use MySQL 5.6 you can use the new feature to store connection credentials in a semi-encrypted manner. See http://dev.mysql.com/doc/refman/5.6/en/mysql-config-editor.html
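A sketch of how that looks (mysql_config_editor prompts for the password and stores it obfuscated in ~/.mylogin.cnf; the login-path name "backup" is arbitrary):
# Store the credentials once, interactively:
mysql_config_editor set --login-path=backup --host=localhost --user=backupuser --password
# Later invocations need no plain-text password:
mysqldump --login-path=backup mydb > dump.sql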
If you use MySQL 5.5 or earlier, then you're right, you should be careful to restrict the file permissions of my.cnf. Mode 600 should be enough (i.e. it's not an executable file).
But if someone breaks into your server, they may have broken in with root access, in which case nothing can restrict what files they read.
MySQL doesn't have enough security to block access if someone gains root access, so it's up to you to use other means to prevent breakins. Firewalls, etc.
Yes, it's possible, but you don't mention how you are going to fetch that data. Using ssh/scp (with a dedicated user for dumps, IP filtering, and private-key authentication with a key password) is acceptable and, in my opinion, safe. Another fast way that is a bit more secure is to set up a VPN. Anything else is paranoid level for personal use.

Linux command to connect to another server using hostname and port number

What is the Linux command to connect to another server using a host name and port number?
How do I connect to another server using only a host name and port number and then check if an existing process is running? The only way I see it working is to log in to the server and run the ps command. But is there a way to check the running process without logging in directly to the other server, connecting only with a host name and port number?
If you just want to try an arbitrary connection to a given host/port combination, you could try one of nmap, telnet or nc (netcat).
Note that you can't necessarily determine whether or not a given process is running remotely - something might be running on that port, but simply ignore anything it sees over it. To really be sure, you will need to run ps, netstat, or the like via ssh.
If you want to use SSH from e.g. a script or, more generally, without typing in login information, then you will want to use public key authentication. Ubuntu has some good documentation on how to set this up, and it's very much applicable to other distributions as well: https://help.ubuntu.com/community/SSH/OpenSSH/Keys.
If you have no access at all to the server whose processes you're trying to list, then I'm afraid there isn't a way to list running processes remotely (besides remote tools like nmap and so on, as mentioned earlier - you can always probe public ports without authentication, although you might make people angry if you do this to servers you don't own). This is a feature, not a problem.
telnet can connect to most services. With it you can ensure that the port is open and see the hello message (if any). nc is more low-level.
eri@eri-macro ~ $ telnet smtp.yandex.ru 25
Trying 87.250.250.38...
Connected to smtp.yandex.ru.
Escape character is '^]'.
220 smtp16.mail.yandex.net ESMTP (Want to use Yandex.Mail for your domain? Visit http://pdd.yandex.ru)
helo
501 5.5.4 HELO requires domain address.
HELO ya.ru
250 smtp16.mail.yandex.net
MAIL FROM: <someusername@somecompany.ru>
502 5.5.2 Syntax error, command unrecognized.
If it is a plain-text protocol you can talk with the service from the keyboard. If the connection is secured, try openssl.
openssl s_client -quiet -connect www.google.com:443
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
verify error:num=20:unable to get local issuer certificate
verify return:0
GET /
<HTML><HEAD>
If the protocol is not known you may see a lot of gibberish, or just a Connected to ... message.
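For a quick scriptable check of whether a port is open, nc can probe without sending any data. A sketch; the host and port are examples, and the -z flag assumes a netcat variant that supports it (both the OpenBSD and traditional versions do):
nc -z -w 3 192.168.1.252 22 && echo "port open" || echo "port closed or filtered"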
Try this :
ssh <YOUR_HOST_NAME> 'ps auxwww'
Like Dark Falcon said in the comments, you need a protocol to communicate with the server, a port alone is useless in this case.
By default on unix (and unix like) servers, ssh is the way to go.
Remote shell with this command. The example cats a file on the remote machine:
rsh host port 'cat remotefile' >> localfile
host and port are self-explanatory
remotefile: name of a file in the home directory of the remote machine
localfile: name of the local file the output is appended to
Use monitoring software (like Nagios). It looks at your processes, sensors, load and whatever else you configure it to watch. It continuously stores logs. It alerts you by email/SMS/Jabber if something fails. You can access it with a browser or by HTTP API.

Notify me when a socket binds, like inotify does for files

I am interested in finding out when things SSH into my boxen to create a reverse tunnel. Currently I'm using a big hack - just lsof with a few lines of script. So my goal is to see when a socket calls bind() and, ideally, get the port it binds to (it's listening locally since it's a reverse tunnel) and the remote host that I would be connecting to. My lsof hack is basically fine, except I don't get instant notifications and it's rather... hacky :)
This is easy for files; once a file does just about anything, inotify can tell me in Linux. Of course, other OSs have a similar capability.
I'm considering simply tailing the SSHD logs and parsing the output, but my little "tunnel monitor" daemon needs to be able to figure out the state of the tunnels at any point in time, even if it hasn't been running the whole time SSHD has.
I have a pretty evil hack I've been considering as well. It's a script that invokes GDB on /usr/sbin/sshd, then sets a breakpoint on bind. Then it runs it with the options -d -p <listening port>. (Running a separate SSHD for these tunnels is fine.) Then it waits for that breakpoint to get hit, and uses GDB's output to get the remote host's IP address and the local IP on which SSH is now listening. Again, that's text parsing and opens some other issues.
Is there a "good" way to do this?
I would use SystemTap for a problem like this. You can use it to probe the kernel to see when a bind is done by any process on the system. http://sourceware.org/systemtap/
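A minimal sketch of such a probe, assuming the systemtap package and matching kernel debuginfo are installed; save it as bindwatch.stp and run it as root with stap bindwatch.stp:
# Report every bind() syscall system-wide, with the calling process and raw arguments.
probe syscall.bind {
    printf("%s[%d] bind%s\n", execname(), pid(), argstr)
}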

Is it possible to connect to several servers and search for text in a log?

I have 8 web servers and 8 app servers on each of which I need to find a particular string of text.
Typically I would connect to the server through putty and type this below to search for that text:
grep "XMemcached network layout exception" /logs/handle1.log
Is there any way I can automate this via a script, so it will automatically connect to those servers and search for those words? Is this possible?
Log in on one of the servers and do something like
for host in web1 web2 web3 [...] app1 app2 app3 [...]; do ssh $host 'grep "XMemcached network layout exception" /logs/handle1.log' | sed "s/^/$host: /"; done
This is just a basic one-off script; if you find yourself doing that kind of thing repeatedly, consider installing clusterssh.
It is possible. I do such tasks over ssh.
Set up ssh authentication without passwords, using keys. On your (monitoring) computer, where you start the grep command, you need to generate an ssh key with ssh-keygen. This creates a keypair in your ~/.ssh/ folder: a private, secret one and a public one. You need to copy the public key to every target computer you want to search logs on; you can copy the file with scp. Afterwards log in to the target computer and append the content of the public key file to the file ~/.ssh/authorized_keys.
Now you should be able to login to the target computer from your monitoring computer without a password.
Please verify this by trying to log on with ssh <target computer>.
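The whole setup can be condensed into a few commands; a sketch using OpenSSH's ssh-copy-id, which performs the copy-and-append step described above:
ssh-keygen -t ed25519        # generate the keypair in ~/.ssh/
ssh-copy-id user@target      # append the public key to target's ~/.ssh/authorized_keys
ssh user@target              # should now log in without a password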
If this works you can just cat the log file over the ssh connection to your local grep command as follows:
ssh <user>@<target computer> "cat <log file>" | grep <search term>
That's only one solution, but it is very easy and quickly established.
With this solution you always stay on your local monitoring computer and only retrieve the log files you want to search through.
A more comfortable solution would probably be to write a little bash script.
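Such a script might look like this; a sketch where the host list and log path are placeholders for your own:
#!/bin/bash
# greplogs.sh - search every server for a pattern; assumes passwordless key logins are set up.
pattern="XMemcached network layout exception"
for host in web1 web2 app1 app2; do
    ssh "$host" "grep '$pattern' /logs/handle1.log" | sed "s/^/$host: /"
done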
I personally like RabbitMQ to connect boxes together for things like this.
Then, http://sphinxsearch.com/ helps on the searching side.
Personally, I would pipe all of the logs through Rabbit to a central search server. If the search traffic is substantial, maybe use a few boxes.
