Is mysqldump secure?

I would like to start this discussion about mysqldump security.
With security I'm not talking about cron tasks that display the password, or password security in any way; I'm talking about the security of the command itself.
In my particular case I have set up a cron job on my home server that runs mysqldump to back up my website database on the VPS I have with 1&1.
So basically the scenario is that my home PC is remotely backing up the MySQL database over port 3306.
This works correctly, but it gives me nightmares: could someone listen on port 3306 and capture all my data while I'm backing up with mysqldump? From what I understand, MySQL on port 3306 does not use SSL, so anybody could potentially grab a copy of the database backup.
I mean, would this scenario be possible?
My home PC starts the mysqldump task
My VPS at 1&1 prepares the SQL dump remotely
My home PC receives the dump locally from the remote server
Between point 2 and point 3, is it possible that someone gets a copy of my file?
Thanks in advance for the answers
Marcos

You should not expose port 3306 on your VPS host to the public internet. MySQL's unencrypted port is not secure.
If you're running mysqldump on your VPS host, and only transferring the resulting dump file to your PC, then you can do this securely.
If you can ssh to your VPS, you should be able to use scp too. This gives you the ability to transfer files securely.
Here's a FAQ article about using scp with 1&1. I found this by googling for "1&1 scp":
http://faq.1and1.co.uk/server/root_server/linux_recovery/9.html
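For example, pulling an existing dump file from the VPS down to your PC could look like this (the host name and paths are placeholders, not your real setup):
$ scp marcos@vps.example.com:/home/marcos/backups/dump.sql.gz ~/backups/
The file travels over the ssh connection, so it is encrypted in transit.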
If you need to run mysqldump on your Home PC and connect remotely to MySQL on the VPS host, you have options:
Run mysqldump on the PC with SSL connection options.
Open a port-forwarding ssh tunnel, then run mysqldump on the PC connecting to the forwarded port (see the sketch after this list).
Run ssh to invoke mysqldump on the VPS, then capture output. See example in the accepted answer to this question: https://serverfault.com/questions/36467/temporary-ssh-tunnel-for-backup-purposes
Create a VPN and do anything you want because it's all encrypted.
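For option 2, a minimal sketch could look like this (user, host, and database names are placeholders):
$ ssh -f -N -L 3307:127.0.0.1:3306 marcos@vps.example.com
$ mysqldump -h 127.0.0.1 -P 3307 -u backupuser -p mydatabase > ~/dump.sql
The first command forwards local port 3307 to port 3306 on the VPS over ssh (-f backgrounds it, -N skips running a remote command); the second dumps through that tunnel, so the data never crosses the internet unencrypted. Kill the backgrounded ssh process when you're done to close the tunnel.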
Re your comments of 10/11:
I need to execute the command from the home PC to back up the VPS remotely.
I want to ... receive the backup file directly instead, so nothing should be saved on the VPS.
Okay, here's what you can do, without exposing port 3306:
$ ssh marcos@192.168.1.3 'mysqldump ...options.. | gzip -c' > ~/dump.sql.gz
Notice the position of quotes in that command. You're executing on the VPS the command: mysqldump ...options.. | gzip -c. The stdout of that command is a gzipped stream of the dump. That stream is returned via ssh, and then > saves the output locally in the shell on your PC.
Re your comment of 10/13:
Now I'm storing on the server a plain-text file that contains the credentials to access the MySQL server. I mean, if someone breaks into the server, they will be able not only to damage the server content but also to damage or steal the MySQL database and its data. Am I right?
If you use MySQL 5.6 you can use the new feature to store connection credentials in a semi-encrypted manner. See http://dev.mysql.com/doc/refman/5.6/en/mysql-config-editor.html
If you use MySQL 5.5 or earlier, then you're right, you should be careful to restrict the file permissions of my.cnf. Mode 600 should be enough (i.e. it's not an executable file).
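A rough sketch of both approaches (the login-path name, user, and database here are just placeholders): on 5.6,
$ mysql_config_editor set --login-path=backup --host=localhost --user=backupuser --password
$ mysqldump --login-path=backup mydatabase > dump.sql
and on 5.5 or earlier, at least lock down the option file that holds the credentials:
$ chmod 600 ~/.my.cnf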
But if someone breaks into your server, they may have broken in with root access, in which case nothing can restrict what files they read.
MySQL doesn't have enough security to block access if someone gains root access, so it's up to you to use other means to prevent break-ins: firewalls, etc.

Yes, it's possible, but you don't mention how you are going to fetch that data. Using ssh/scp (with a dedicated user for dumps, IP filtering, and authentication based on a private key with a passphrase) is acceptable and can be considered safe, in my opinion. Another fast way to be a bit more secure is to set up a VPN. Anything else is paranoid-level for personal use.
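As a sketch of what "dedicated user, IP filtered, key-based" can look like on the VPS side: OpenSSH lets you pin a key to a single source IP and a single command in that user's ~/.ssh/authorized_keys (the IP, key, and wrapper script below are placeholders):
from="203.0.113.10",command="/usr/local/bin/dump-db.sh",no-pty,no-port-forwarding ssh-ed25519 AAAA... backup-key
With an entry like that, the key used by your home PC can only ever trigger the dump script, nothing else.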

Related

Basic question about postgresql configuration

I just installed the new version of Elementary OS and I lost the configuration that made my PostgreSQL work.
I have an app that works perfectly online with a remote DB on Heroku, but when I run it on my local machine I can't reach the server. I think I'm missing something in pg_hba.conf, because all services are up and running and all ports are open for the DB. Currently I have this config file:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
host all all 0.0.0.0/0 md5
host all all ::/0 md5
I hope you can give me a way to connect to my DB. In my last installation I was able to, but I lost the config file.
Hello and thanks for the replies. I read the docs, but they only say how to connect to the local service or how to configure a server that runs on one machine. I had already done all of those steps (and repeated them just in case): adding the user, configuring the local DB, granting the admin user to the DB, etc.
This line (the last)
host all all ::/0 md5
is the one the docs say to add, but it only seems to work for DB calls from the same machine.
I've checked with web monitors, scanned ports and whatever else I could see on the Linux system: the PostgreSQL ports are open, the service runs, everything seems fine. The DB is reachable via pgAdmin with the same credentials the app uses. The app is a Node.js application that calls the DB for an interactive website.
For those reasons I believe it should be a configuration problem. I also have no active firewall and no rules other than the actual PostgreSQL config file.
As I said, the app works perfectly when everything is local or when everything is on the server. I need a cross configuration for development, so I have a quick way to work on the actual online DB with a local (editable) copy of the web app, or to let more people develop at the same time from different machines.
There is no error; the app just can't reach the DB and loops trying to find it, until it finally times out.
Last time I fixed this with a similar line, so obviously this one is not the right form. What I'm asking for is simply a line of config. I am not skilled in server configuration and I don't really need to be: once this is online, the server will already be configured. I don't even care what SQL flavour I work on; the app has a parser that makes all SQL dialects compatible.
I had to restore the system because of a problem, otherwise everything was working before; I only changed that one line and can't remember how...
I hope this clears up the situation.

Confluence in Docker can't see PostgreSQL in Docker

I'm trying to set up both Confluence and PostgreSQL in Docker. I've got them both up and running on my fully up to date CentOS 6 machine, with volume-mapping to the host file system so I can back them up easily. I can connect to PostgreSQL using pgAdmin from another machine just fine, and I can get into Confluence from a browser from that same machine. So, basically, both apps seem to be running as expected inside their respective containers and are accessible to the outside world, which of course eliminates a whole bunch of possibilities for my issue.
And that issue is that Confluence can't talk to PostgreSQL during initial setup, which is necessary for it to function. I'm getting connection failed errors (to be specific: "Can't reach database server or port : SQLState - 08001 org.postgresql.util.PSQLException: The connection attempt failed").
PostgreSQL is using the default 5432 port, which of course is exposed, otherwise I wouldn't be able to connect to it via pgAdmin, and of course I know the ID/password I'm trying is correct for the same reason (and besides, if it was an auth problem I wouldn't expect to see this error message). When I try to configure the database connection during Confluence's initial setup, I specify the IP address of the host machine, just like from pgAdmin on the other machine, but that doesn't work. I also tried some things that I basically knew wouldn't work (0.0.0.0, 127.0.0.1 and localhost).
I'm not sure what I need to do to make this work. Is there maybe some special method to specify the IP to a container from the same host machine, some nomenclature I'm not aware of?
At this point, I'm "okay" with Docker in terms of basic operations, but I'm far from an expert, so I'm a bit lost. I'm also not a big-time *nix user generally, though I can usually fumble my way through most things... but any hints would be greatly appreciated because I'm at a loss right now otherwise.
Thanks,
Frank
EDIT 1: As requested by someone below, here's my pg_hba.conf file, minus comments:
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
host all all all md5
Try changing the second line of the pg_hba.conf file to the following:
host all all 0.0.0.0/0 trust
This will cause PostgreSQL to start accepting connections from any source address. Since a Docker container technically does not operate on localhost but on its own IP, the current configuration causes PostgreSQL to block connections coming from it.
Also check whether Confluence is looking for the database on localhost. If that is the case, change it to the IP of the host machine within the Docker network.
Success! The solution was to create a custom network and then use the container name in the connection string from the Confluence container to the PostgreSQL container. In other words, I ran this:
docker network create -d bridge docker-net
Then, on both of the docker run commands for the PostgreSQL and Confluence containers, I added:
--network=docker-net
That way, when the Confluence configuration wizard asked for the hostname of the PostgreSQL server, I used postgres (the name I gave the container) rather than an IP address or actual hostname. Docker makes that work thanks to the custom network. This also leaves the containers available via the IP of the host machine, so for example I can still connect to PostgreSQL via 192.168.123.12:5432, and of course I can launch Confluence in the browser via 192.168.123.12:8080.
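For illustration, the two run commands ended up looking roughly like this (image tags, ports, passwords and volume paths are placeholders rather than my exact setup):
docker run -d --name postgres --network=docker-net -e POSTGRES_PASSWORD=changeme -v /docker/postgres:/var/lib/postgresql/data -p 5432:5432 postgres
docker run -d --name confluence --network=docker-net -v /docker/confluence:/var/atlassian/application-data/confluence -p 8080:8090 atlassian/confluence-server
Because both containers sit on docker-net, Confluence can reach the database simply as postgres:5432.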
FYI, I didn't even have to alter the pg_hba.conf file, I just used the official PostgreSQL image (latest) as it was, which is ideal.
Thanks very much to RSloeserwij for the suggestions... while none of them proved to be the solution I needed, they did put me on the right track in the Docker docs, which, after some reading, led me to understand a few things I didn't before and figure out the config magic I needed.

linux command to connect to another server using hostname and port number

What is the Linux command to connect to another server using a host name and port number?
How do I connect to another server using only a host name and port number, and then check whether an existing process is running? The only way I see it working is to log in to the server and run the ps command. But is there a way to do it without logging in directly to the other server, connecting only with a host name and port number, and then check the running process?
If you just want to try an arbitrary connection to a given host/port combination, you could try one of nmap, telnet or nc (netcat).
Note that you can't necessarily determine remotely whether or not a process is running - it might be listening on that port but simply ignoring anything it sees on it. To really be sure, you will need to run ps or netstat (or similar) via ssh.
If you want to use SSH from e.g. a script or, more generally, without typing in login information, then you will want to use public key authentication. Ubuntu has some good documentation on how to set this up, and it's very much applicable to other distributions as well: https://help.ubuntu.com/community/SSH/OpenSSH/Keys.
If you have no access at all to the server whose processes you're trying to list, then I'm afraid there isn't a way to list running processes remotely (besides remote tools like nmap and so on, as mentioned earlier - you can always probe public ports without authentication, although you might make people angry if you do this to servers you don't own). This is a feature, not a problem.
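For example, probing a port and then checking for the process over ssh might look like this (host, user, port and process name are placeholders):
nc -zv app01.example.com 8080
ssh admin@app01.example.com 'pgrep -fl java'
The first command only tells you whether something is accepting connections on that port; the second actually lists matching processes, which requires logging in.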
telnet connects to most services. With it you can check that the port is open and see the hello message (if any). nc is a more low-level alternative.
eri@eri-macro ~ $ telnet smtp.yandex.ru 25
Trying 87.250.250.38...
Connected to smtp.yandex.ru.
Escape character is '^]'.
220 smtp16.mail.yandex.net ESMTP (Want to use Yandex.Mail for your domain? Visit http://pdd.yandex.ru)
helo
501 5.5.4 HELO requires domain address.
HELO ya.ru
250 smtp16.mail.yandex.net
MAIL FROM: <someusername@somecompany.ru>
502 5.5.2 Syntax error, command unrecognized.
If it is a plain-text protocol you can talk to the service from the keyboard. If the connection is secured, try openssl.
openssl s_client -quiet -connect www.google.com:443
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
verify error:num=20:unable to get local issuer certificate
verify return:0
GET /
<HTML><HEAD>
If the protocol is unknown you may see a lot of gibberish, or just a Connected to ... message.
Try this:
ssh <YOUR_HOST_NAME> 'ps auxwww'
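If you only care about one specific process, you can filter on the remote side as well (the process name here is just an example):
ssh <YOUR_HOST_NAME> 'ps auxwww | grep "[h]ttpd"'
The [h] bracket trick keeps the grep command itself from showing up in the results.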
Like Dark Falcon said in the comments, you need a protocol to communicate with the server; a port alone is useless in this case.
By default on unix (and unix like) servers, ssh is the way to go.
Use a remote shell (rsh) for this; the example below cats a file on the remote machine.
rsh host port 'cat remotefile' >> localfile
host and port are self-explanatory
remotefile: the name of a file (in the home directory) on the remote machine you are logging in to
localfile: the name of the local file the output is appended to.
Use monitoring software (like Nagios). It watches your processes, sensors, load, and whatever else you configure it to watch. It continuously stores logs. It alerts you by email/SMS/Jabber if something fails. You can access it with a browser or via an HTTP API.

Is it possible to connect to several servers and search for text in a log?

I have 8 web servers and 8 app servers on each of which I need to find a particular string of text.
Typically I would connect to the server through putty and type this below to search for that text:
grep "XMemcached network layout exception" /logs/handle1.log
Is there any way I can automate this via a script, so that it automatically connects to those servers and searches for those words? Is this possible?
Log in on one of the servers and do something like
for host in web1 web2 web3 [...] app1 app2 app3 [...]; do ssh $host "grep 'XMemcached network layout exception' /logs/handle1.log" | sed "s/^/$host: /"; done
This is just a basic one-off script; if you find yourself doing this kind of thing repeatedly, consider installing clusterssh.
It is possible. I do such tasks over ssh.
Set up ssh authentication without passwords, using keys. On your (monitoring) computer, where you start the grep command, you need to generate an ssh key with ssh-keygen. This will create a key pair for you in your ~/.ssh/ folder: a private, secret one and a public one. You need to copy the public key to every target computer whose logs you want to search. You can copy the file to the target computer with scp. Afterwards, log in to the target computer and append the content of the public key file to the file ~/.ssh/authorized_keys.
Now you should be able to login to the target computer from your monitoring computer without a password.
Please verify this by trying to log on with ssh <target computer>.
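The key setup described above boils down to something like this (user and host names are placeholders):
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub monitor@target.example.com:
ssh monitor@target.example.com 'cat id_rsa.pub >> ~/.ssh/authorized_keys'
On many systems, ssh-copy-id monitor@target.example.com does the last two steps in one go.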
If this works you can just cat the log file over the ssh connection to your local grep command as follows:
ssh <user>@<target computer> "cat <log file>" | grep <search term>
That's only one solution, but it is very easy and quick to set up.
With this solution you always stay on your local monitoring computer and only retrieve the log files you want to search through.
A more comfortable solution would probably be to write a little bash script.
I personally like RabbitMQ to connect boxes together for things like this.
Then, http://sphinxsearch.com/ helps on the searching side.
Personally, I would pipe all of the logs through rabbit to a central search server. If the search traffic is substantial, maybe using a few boxes.

Noob questions for SVN checkout and network issues regarding it

We have a local server with SVN installed on it that we are using for development/testing purposes. We would like to check out the data from it to the live server that is somewhere out there.
The only way to do that which I could think of is to use "svn checkout" from the live server, right? This way we do not need to FTP the changes to it, which may cause problems if we forget to upload some of the changes. And if we find a problem we can always go back to the previous stable version, right? Correct me if I am wrong about any of this.
The problem is that our local server (Ubuntu) does not have an IP that is reachable from outside. We have a router from our ISP, but we cannot use that to access the local server from the live one. We are willing to ask the ISP to set up a second IP for the local server, but for security's sake they want to set up a separate machine with Windows and Windows-based security software (a firewall - http://www.kerio.com/control/ - and antivirus) that will cost us a lot. Can we just set up a free firewall on the local server (Ubuntu, as I said) and solve the problem without spending additional money?
I hope I was clear.
It's always hard to comment without knowing the exact situation, but this sounds a bit crazy.
What you would usually do is set up port forwarding for one port to the local server. The server would then be reachable (for example) through 123.45.67.89:3690
That's a three-minute task to set up in a normal household router.
As long as the Ubuntu server is otherwise closed, and Subversion (or whatever you are using for authentication) is properly configured and up to date, this should not create security issues.
In any case, putting a Windows machine in between to act as a firewall sounds really unnecessary. Ubuntu comes with everything necessary to secure the setup properly.
If the remote server has an ssh server, then you can use ssh forwarding.
From the internal svn server:
ssh -R 7711:localhost:3690 {REMOTE_SERVER}
7711 is an arbitrary port (you can use any free port on the remote system) that will be forwarded from the remote system to port 3690 (svn) on the svn server.
3690 is the port on the internal svn server that you want to talk to (via svn://).
If you are using subversion over http:// then use port 80 instead of 3690.
If you are using subversion over https:// then use port 443 instead of 3690.
After setting up the forward, then you can do this on the remote system:
svn checkout {SCHEME}://localhost:7711/{PATH}
{SCHEME} is svn, http, https, etc.
{PATH} is the normal svn path you want to check out.
Notes:
the forwarded traffic is tunneled through the ssh connection (on a different "channel"), so it is also encrypted, which is a nice benefit.
by default, the remote end of the forward will listen on the loopback interface, so only processes on that system will be able to use the forwarded port.
As soon as you close the ssh session, the forwarded port will also close. It only lasts the duration of the ssh connection.
ssh forwarding is very powerful. If you can ssh between two systems, then you can get around any sort of connection problem like this.
Do man ssh and read about the -L and -R options.
Useful links about ssh forwarding:
http://www.rzg.mpg.de/networkservices/ssh-tunnelling-port-forwarding:
http://www.walkernews.net/2007/07/21/how-to-setup-ssh-port-forwarding-in-3-minutes/
Check if your ISP router provides some port-forwarding capability.
You should probably forward the ssh port (after ensuring that everyone's password is secure, or enforcing login with ssh key files), and use the svn+ssh protocol to access your repository.
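With the ssh port forwarded on the router, the checkout from the live server would then look something like this (user, address and repository path are placeholders):
svn checkout svn+ssh://deploy@123.45.67.89/var/svn/myproject /var/www/myproject
svn+ssh tunnels svnserve over the ssh connection, so the repository traffic is encrypted and no extra svnserve port has to be exposed.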
You should be able to open up and forward a single port (3690 by default) on your existing IP to the local server, as pointed out by Pekka. This depends on your router, and your ability to access the configuration interface on the router.
Instead of having to deal with SSH and worry about people trying to access your local server from anywhere, you could set up a firewall to only allow incoming traffic from your single remote server. Depending on the router setup, you could simply use the built-in firewall on the local server. It would still be advisable to have some svn authentication, though.
The SSH forwarding method described by kanaka sidesteps the entire issue of remote access to the local machine, but it requires you to execute the forwarding command from the local server every time you need to access svn on the remote server.
