postgresql.conf and hba.conf in YugabyteDB YSQL? - yugabytedb

[Question posted by a user on YugabyteDB Community Slack]
When I set up YugabyteDB I don't remember editing ysql_hba.conf or postgresql.conf, so I didn't set it up to allow connections from everywhere to its port 5433. But I am still able to access it, which is not the case with regular Postgres. So how does this work? That said, I am not able to connect to YB running on the same machine using ysqlsh, so it looks like postgresql.conf does matter.

The relevant tserver flags are ysql_pg_conf_csv and ysql_hba_conf_csv, which generate ysql_pg.conf and ysql_hba.conf. See the docs: yugabyte.com/preview/secure
hba.conf is the one that determines authentication. For example, with hba.conf, you can specify which IPs can authenticate with which methods.
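A quick sketch of how those flags might be passed when starting the tserver (the binary is yb-tserver; the specific settings and address ranges below are purely illustrative):

yb-tserver \
  --ysql_pg_conf_csv="log_statement=all,statement_timeout=60000" \
  --ysql_hba_conf_csv='host all all 10.0.0.0/8 md5,host all all 0.0.0.0/0 reject' \
  <other flags>

Each comma-separated entry becomes one line of the generated ysql_pg.conf / ysql_hba.conf, so the usual PostgreSQL syntax applies within each entry.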

Related

Change broadcast_address of existing YugabyteDB installation

[Question posted by a user on YugabyteDB Community Slack]
I have a single-node Yugabyte 2.12.3 instance set up to use the public server address.
When I try to change the services to bind to localhost, I can't start the master service properly; I get this error:
UNKNOWN_ROLE
ERROR: Network error (yb/util/net/socket.cc:551):
Unable to get registration information for peer ([10.20.12.246:7100]) id (fad4f3b477364900a15679cd954bf6b5): recvmsg error: Connection refused (system error 111)
What do I need to adjust to start the master service normally?
Does this localhost master service keep info about the previous node setup and try to contact that master node?
I don't think we support changing the bind/broadcast address after the fact. We persist address information in various structures across servers, e.g. for each tablet we store the IPs of its Raft group members.

Share Redis Db among many users

If I store some data in my Redis cache on my machine, is that data accessible to other people on their machines, or is the Redis DB limited to one user only?
For sure you can share your Redis database with as many users as you want, provided you open up the network to your endpoint and port.
Note there is no Access Control List (ACL) in Redis before version 6; Redis 6 just came out as release candidate 1 a few days ago. If you require ACLs, you may consider it if you are OK working with an RC1.
You can configure a Redis password, but it is one password shared by all users. You can also ask clients to identify themselves by providing a name, but that is an honor system. Again, there is no ACL before Redis 6.
You can also use the firewall (network security) to limit what machines can connect to your instance.
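As a rough illustration, the shared-password and firewall options boil down to something like the following (the address, password, and subnet are placeholders):

# in redis.conf
bind 10.0.0.5
requirepass some-long-shared-password

# on the host, with UFW as the firewall
ufw allow from 10.0.0.0/24 to any port 6379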
Take a look at https://redis.io/topics/security for more on security.
To learn about Redis 6 ACL see https://redis.io/topics/acl

Confluence in Docker can't see PostgreSQL in Docker

I'm trying to set up both Confluence and PostgreSQL in Docker. I've got them both up and running on my fully up to date CentOS 6 machine, with volume-mapping to the host file system so I can back them up easily. I can connect to PostgreSQL using pgAdmin from another machine just fine, and I can get into Confluence from a browser from that same machine. So, basically, both apps seem to be running as expected inside their respective containers and are accessible to the outside world, which of course eliminates a whole bunch of possibilities for my issue.
And that issue is that Confluence can't talk to PostgreSQL during initial setup, which is necessary for it to function. I'm getting connection failed errors (to be specific: "Can't reach database server or port : SQLState - 08001 org.postgresql.util.PSQLException: The connection attempt failed").
PostgreSQL is using the default 5432 port, which of course is exposed, otherwise I wouldn't be able to connect to it via pgAdmin, and of course I know the ID/password I'm trying is correct for the same reason (and besides, if it was an auth problem I wouldn't expect to see this error message). When I try to configure the database connection during Confluence's initial setup, I specify the IP address of the host machine, just like from pgAdmin on the other machine, but that doesn't work. I also tried some things that I basically knew wouldn't work (0.0.0.0, 127.0.0.1 and localhost).
I'm not sure what I need to do to make this work. Is there maybe some special method to specify the IP to a container from the same host machine, some nomenclature I'm not aware of?
At this point, I'm "okay" with Docker in terms of basic operations, but I'm far from an expert, so I'm a bit lost. I'm also not a big-time *nix user generally, though I can usually fumble my way through most things... but any hints would be greatly appreciated because I'm at a loss right now otherwise.
Thanks,
Frank
EDIT 1: As requested by someone below, here's my pg_hba.conf file, minus comments:
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
host all all all md5
Try changing the second line of the pg_hba.conf file to the following:
host all all 0.0.0.0/0 trust
This will cause PostgreSQL to start accepting connections from any source address. Since a Docker container is technically not operating on localhost but on its own IP, the current configuration causes PostgreSQL to reject its connections.
Also check whether Confluence is looking for the database on localhost. If that is the case, change it to the IP of the host machine within the Docker network.
Success! The solution was to create a custom network and then use the container name in the connection string from the Confluence container to the PostgreSQL container. In other words, I ran this:
docker network create -d bridge docker-net
Then, on both of the docker run commands for the PostgreSQL and Confluence containers, I added:
--network=docker-net
That way, when I ran through the Confluence configuration wizard, when it asked for the hostname for the PostgreSQL server, I used postgres (the name I gave the container) rather than an IP address or actual hostname. Docker makes that work thanks to the custom network. This also leaves the containers available via the IP of the host machine, so for example I can still connect to PostgreSQL via 192.168.123.12:5432, and of course I can launch Confluence in the browser via 192.168.123.12:8080.
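Roughly, the full set of commands looks like this (the image tags, password, and published ports here are illustrative, not necessarily the exact values I used):

docker network create -d bridge docker-net

docker run -d --name postgres --network=docker-net \
  -e POSTGRES_PASSWORD=changeme -p 5432:5432 postgres:latest

docker run -d --name confluence --network=docker-net \
  -p 8080:8090 atlassian/confluence-server:latest

In the Confluence setup wizard, the database host is then just postgres, port 5432.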
FYI, I didn't even have to alter the pg_hba.conf file; I just used the official PostgreSQL image (latest) as it was, which is ideal.
Thanks very much to RSloeserwij for the suggestions... while none of them proved to be the solution I needed, they did put me on the right track in the Docker docs, which, after some reading, led me to understand a few things I didn't before and figure out the config magic I needed.

Secure MongoDB Over Public Internet

With the recent attacks on Mongo databases, I've seen many guides on how to password-protect your database. I've gone through each guide and I've set up a 'superAdmin' with root role and another basicAdmin with read/write privileges. I reboot mongo using
mongod --auth
and authenticate using my superAdmin login, however this causes problems for my site which uses this db. When I boot my Node app, I can't access any pages as it cannot connect to the database because it has auth enabled. If in my config/database.js file I have:
module.exports = {
    'database': 'mongodb://myWebsite.com/myDatabase'
};
How can I allow my site to access my MongoDB and read/write as users signup but also restrict any ransomware group from just walking in and dropping every collection over and over?
There are three main methods that you can use to protect your database.
Username and password
This is the simplest one. Since you have already secured the server with a password, you can simply connect to the database using Mongoose as:
mongoose.connect('mongodb://username:password@host:port/database');
I would also recommend changing MongoDB's default port to something else; the port setting lives in /etc/mongodb.conf.
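For example, the relevant line in /etc/mongodb.conf would look something like this (the port number is arbitrary):

port = 27018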
Bind to a private IP and use a firewall
Again referring to /etc/mongodb.conf, change bind_ip to a local IP of your network (most hosting providers give you one). It is also better to set up a firewall; a simple one you can use is UFW. Only allow traffic from the servers you are actually using. This method might not be effective if you are on a shared VPN service.
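A rough sketch of what that could look like (the addresses are placeholders):

# in /etc/mongodb.conf
bind_ip = 10.0.0.5

# on the host, allowing only your app server through UFW
ufw allow from 10.0.0.10 to any port 27017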
SSH tunnel to access database
This is by far the most reliable method, and I would recommend using it together with the previous one. Here is how it works: set bind_ip to 127.0.0.1, and let us assume MongoDB is listening on port 5000. To set up the tunnel, use:
ssh \
  -L 4000:localhost:5000 \
  -i ~/.ssh/key \
  <username>@mongo_db_ip
Remember to add your SSH key to the instance running the MongoDB database. The above command should be issued on the server that is running Node.js. 5000, as mentioned, is the remote port, and 4000 is the local port you connect to for MongoDB. So your Mongoose command to connect to the database would be:
mongoose.connect('mongodb://localhost:4000/<database>');
Summary
As you can see, in almost all the steps I have focused on setting up a firewall, which is very important.
Also, usernames and passwords should be avoided where possible; it is better to use SSH keys. From practical experience, they also reduce a lot of burden while you are scaling up your service.
Hope this helps you out.

Securing zookeeper, where to start?

I feel lost trying to figure out what my options are. Apache's programmer's guide and administrator's guide do not detail anything substantial. My O'Reilly ZooKeeper book barely talks about security... did I miss something? I was hoping to find tutorials through Google about authenticating client connections, authorizing actions, and encrypting messages sent between ZooKeeper servers and clients.
I had a lot of trouble, but I figured it out, and the links at the bottom were a huge help to me.
This code (using Curator) was something hard to figure out:
// Grant all permissions to the authenticated identities only
List<ACL> myAclList = new ArrayList<ACL>();
myAclList.add(new ACL(ZooDefs.Perms.ALL, ZooDefs.Ids.AUTH_IDS));
client.create().withACL(myAclList).forPath(myPath);
If I set up the ZooKeeper configuration correctly, it will enforce that only the AUTH_IDS are allowed to access my znode.
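For completeness, here is a rough sketch of how the client can attach the auth info that AUTH_IDS resolves to (the connect string and digest credentials are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Build a Curator client that authenticates with the "digest" scheme;
// znodes created with AUTH_IDS are then owned by this identity.
CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("localhost:2181")
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .authorization("digest", "myuser:mypassword".getBytes())
        .build();
client.start();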
Official documentation, my mailing list Q1, my mailing list Q2, and a JIRA issue that I found useful, though some items are out of date
Since ZooKeeper version 3.5.4-beta, you can use client certificates to secure communication with a remote ZooKeeper server:
Client
The ZooKeeper client can use Netty by setting this Java system property:
zookeeper.clientCnxnSocket="org.apache.zookeeper.ClientCnxnSocketNetty"
To enable secure communication on the client, set this Java system property:
zookeeper.client.secure=true
Note that with the "secure" property set, the client can and should only connect to the server's "secureClientPort", which is described below.
Then set up keystore and truststore environment by setting the following Java system properties:
zookeeper.ssl.keyStore.location="/path/to/your/keystore"
zookeeper.ssl.keyStore.password="keystore_password"
zookeeper.ssl.trustStore.location="/path/to/your/truststore"
zookeeper.ssl.trustStore.password="truststore_password"
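Put together, launching a client JVM with those settings might look like this (the paths, passwords, and main class are placeholders):

java \
  -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty \
  -Dzookeeper.client.secure=true \
  -Dzookeeper.ssl.keyStore.location=/path/to/your/keystore \
  -Dzookeeper.ssl.keyStore.password=keystore_password \
  -Dzookeeper.ssl.trustStore.location=/path/to/your/truststore \
  -Dzookeeper.ssl.trustStore.password=truststore_password \
  -cp my-app.jar com.example.MyZkClient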
Server
The ZooKeeper server can use Netty by setting this Java system property:
zookeeper.serverCnxnFactory="org.apache.zookeeper.server.NettyServerCnxnFactory"
The ZooKeeper server also needs a listening port to accept secure client connections. This port is different from, and runs in parallel with, the familiar "clientPort". It should be added in "zoo.cfg":
secureClientPort=2281
All secure clients (mentioned above) should connect to this port.
Then set up the keystore and truststore environment just as the client does.
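On the server side, one way to do that (paths and passwords are again placeholders) is to put the properties into SERVER_JVMFLAGS, e.g. in conf/zookeeper-env.sh, alongside the secureClientPort entry in zoo.cfg:

SERVER_JVMFLAGS="-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory \
  -Dzookeeper.ssl.keyStore.location=/path/to/server/keystore \
  -Dzookeeper.ssl.keyStore.password=keystore_password \
  -Dzookeeper.ssl.trustStore.location=/path/to/server/truststore \
  -Dzookeeper.ssl.trustStore.password=truststore_password"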
More info here:
https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
