YugabyteDB YSQL inter-node compression

[Question posted by a user on YugabyteDB Community Slack]
Question about YB and compression.
We want to use the YSQL connector; does it support SSL compression like vanilla PostgreSQL?
Postgres allows compression using OpenSSL zlib, though some DB vendors (RDS, for example) block this. I was wondering if it's supported by YB?
Moving to YB will introduce new traffic costs for inter-node communication that we don't face at the moment.
I was thinking of SSL compression as a workaround, but the lack of it will probably limit our ability to migrate.

From the PostgreSQL docs:
SSL compression is nowadays considered insecure and its use is no longer recommended. OpenSSL 1.1.0 disables compression by default, and many operating system distributions disable it in prior versions as well, so setting this parameter to on will not have any effect if the server does not accept compression.
If security is not a primary concern, compression can improve throughput if the network is the bottleneck. Disabling compression can improve response time and throughput if CPU performance is the limiting factor.
PostgreSQL 14 disables compression completely in the backend.
Usually, the bottleneck in our case is the CPU, so it probably won't help. (Compressing after encryption would not help at all anyway, since ciphertext is effectively incompressible; TLS compresses before encrypting for exactly that reason.)
Inter-node compression is supported and enabled by default: https://docs.yugabyte.com/preview/reference/configuration/yb-tserver/#network-compression
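For reference, here is a minimal sketch of how those flags might be set when starting a tserver. The flag names (enable_stream_compression, stream_compression_algo) come from the linked docs; the algorithm codes and defaults can vary by YugabyteDB version, so verify against the docs for your release.

    # Illustrative yb-tserver invocation; check the linked docs for your version.
    yb-tserver \
      --fs_data_dirs=/mnt/d0 \
      --enable_stream_compression=true \
      --stream_compression_algo=2   # integer selects the algorithm (gzip, snappy, LZ4); see docs for exact codes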

Related

YugabyteDB working under an unreliable internet connection?

[Question posted by a user on YugabyteDB Community Slack]
I am interested in Yugabyte’s ability to be geo-distributed. However, I am wondering if the protocol between the different nodes can tolerate DIL conditions (see: Disconnected, Intermittent and Limited (DIL) [DIDO Wiki]) where the network is not reliable. Are there timeouts? Can they change? Are there protocols/defaults for collisions?
where the network is not reliable
YugabyteDB and its geo-distribution and replication are designed to tolerate unexpected issues.
However, it is not a good idea to deliberately run it where the connection is not stable; that is simply not what it is designed for.
Are there timeouts? Can they change?
See yb-master configuration reference | YugabyteDB Docs.
Are there protocols/defaults for collisions?
The default protocol for collisions is last write wins: xCluster replication | YugabyteDB Docs

Cassandra reducing performance when enabling authorization

I have a 6-node Cassandra cluster, and I want to enable authorization/authentication on it. But I have read a few comments from people who administer Cassandra saying that enabling authorization reduces performance. Is it really so?
Who has experienced this, and how can it be avoided?
Just my experience here, and it is not meant to discount the experience of others. Since 2012, I have personally built over 200 Apache Cassandra clusters on infra ranging from bare metal, to K8s, to the public clouds; spanning environments from Dev, Stage, Test, and (of course) Production.
Every single one of those clusters (even Dev) had Authorization and Authentication enabled. Some of them also had SSL enabled.
My team was also occasionally asked to assume management of clusters run directly by an application team. Some of those did not have auth enabled. Thus verifying/enabling auth was one of the first tasks that we performed. Latency incurred by activating authentication was often a voiced concern.
That being said, at no point was enabling Cassandra's native auth deemed to be disruptive. In fact, one of the prod clusters with both auth & SSL enabled would routinely post a P95 read latency of less than 5ms, while supporting throughput of up to 250k ops/sec.
The only time it was ever an issue was when we integrated a few clusters with a third-party plugin for LDAP. Cassandra's own Authentication and Authorization never posed a noticeable issue.
If you find that enabling auth does cause latency, the main tunable in cassandra.yaml is credentials_validity_in_ms. It defaults to 2000 ms (2 seconds) and represents how often a long-running connection refreshes its cached credentials. I've heard of some folks setting it as high as 3 hours (which I think is too high), but if auth latency becomes problematic, increasing that setting should help; a sketch follows below.
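As an illustrative sketch, the relevant settings in cassandra.yaml might look like this (the values are examples, not recommendations):

    authenticator: PasswordAuthenticator
    authorizer: CassandraAuthorizer
    # How long cached credentials stay valid before a refresh; default is 2000 ms.
    # Raising it reduces auth round-trips, at the cost of slower credential revocation.
    credentials_validity_in_ms: 10000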

How can I disable SSL3 on DB2?

Is there a way to make DB2 not accept SSL3?
I'm trying to secure a couple of DB2 databases I have on a couple of servers against the POODLE attack. I know you can do this through the operating system itself, but my question is: if I don't have control over the OS, can I at least make DB2 stop using SSL3?
I have many Java applications, and some of them might be using SSL3. I want to be sure these applications will fail when they try to use SSL3 to connect to these DB2 databases.
Starting with DB2 LUW 9.7 (I'm assuming you mean LUW here...), you can specify which implementation of SSL/TLS you want to use when doing the handshake. It looks like (at least since they implemented this configuration option) DB2 has only ever supported TLS. The configuration option is called ssl_versions.
Additionally, you can specify which ciphers you wish to use with the ssl_cipherspecs configuration option. The default is to allow DB2 and the client to negotiate the strongest cipher they both understand.
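As a hedged example, forcing TLS and pinning a specific cipher might look like the following. The parameter names are per the DB2 LUW docs, but the accepted version keywords and cipher names vary by DB2 release, so verify against yours.

    db2 update dbm cfg using SSL_VERSIONS TLSV12
    db2 update dbm cfg using SSL_CIPHERSPECS TLS_RSA_WITH_AES_256_GCM_SHA384
    db2stop
    db2start    # the instance must be recycled for these DBM CFG changes to take effect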

Cache distribution framework for PHP

I am currently looking for a cache distribution framework for my PHP implementation, mainly for local or remote cache storage.
I have some idea about "Memcache" & "Apache Cassandra".
Are there any other well-regarded frameworks?
Thanks
You should consider Couchbase, as it provides a distributed persistent cache that is quite performant and super easy to use. The problem with Memcached is that it's harder to scale to additional machines, and if a machine goes down you lose all those keys and have to rebuild the cache. Cassandra also has excellent caching support but is quite a bit more complex; if you don't need the complexity Couchbase is probably a better choice.

MongoDB single server security and reliability for production environment?

I want to set up MongoDb on a single server, and I've been searching around to make sure I do it right. I have gleaned a few basics on security so far:
Enable authentication (http://www.mongodb.org/display/DOCS/Security+and+Authentication - not enabled by default?)
Only allow localhost connections
In PHP, be sure to cast GET and POST parameters to strings to avoid injection attacks (http://www.php.net/manual/en/mongo.security.php)
I've also picked up one thing about reliability.
You used to have to use sharding on multiple boxes, but now you can just enable journaling? (http://stackoverflow.com/questions/3487456/mongodb-are-reliability-issues-significant-still)
Is that the end of the story? Enable Authentication and Journaling and you are good to go on a single server?
Thanks!
If you are running on a single server, then you should definitely have journaling enabled. On 2.0, this is the default for 64-bit builds; on 32-bit builds or older releases (the 1.8.x series) you can enable it with the --journal command-line flag or config file option. Be aware that using journaling will cause MongoDB to use double the memory it normally would, which is mostly an issue on 32-bit machines (memory there is ordinarily constrained to around 2GB; with journaling it is effectively halved).
Authentication can help, but the best security measures are to ensure that only machines you control can talk to MongoDB. You can do this with the --bind_ip command-line flag or config file option. You should also set up a firewall (iptables or similar) as an extra measure.
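Putting that together, a minimal 2.0-era invocation might look like this (the paths are examples; the flag names come from the mongod options of that period):

    mongod --dbpath /var/lib/mongodb --bind_ip 127.0.0.1 --auth --journal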
As for injection attacks, you should mostly be safe, as long as you don't blindly convert JSON (or similar structures) into PHP assocs and pass them directly to the MongoDB methods. If you construct the assoc yourself, by processing the $_POST or $_GET values, you should be safe, as in the sketch below.
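For example, with the old PHP Mongo driver, casting the request values to strings keeps a crafted query string like ?password[$ne]= from injecting a MongoDB operator. The database, collection, and field names here are hypothetical.

    <?php
    $m = new Mongo();               // 2.0-era PHP driver
    $users = $m->mydb->users;       // hypothetical db/collection

    // Cast user input to strings so arrays/operators can't sneak in.
    $username = (string) $_GET['username'];
    $password = (string) $_GET['password'];

    $user = $users->findOne(array(
        'username' => $username,
        'password' => $password,
    ));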
