Change DataStax Java driver user and password at runtime - Cassandra

I am running a Java app with DataStax Java driver version 3.3.0, and I am trying to change the Cassandra user and password at runtime. According to this issue, calling setUsername and setPassword on the auth provider object should make the driver use the new credentials for future calls to the DB, so I did something like this in my code:
PlainTextAuthProvider authProvider = (PlainTextAuthProvider) cluster.getConfiguration().getProtocolOptions().getAuthProvider();
authProvider.setUsername(user);
authProvider.setPassword(pwd);
But even after this change, I find that the driver still uses the user/password I set at startup.
This documentation indicates that in version 4.x and later you can force a config reload at runtime.
So my question is: is there a way to force all new calls to Cassandra to start using the new username and password provided via PlainTextAuthProvider in version 3.x?
Extra: I am doing this to rotate Vault passwords when they expire and update the driver accordingly, similar to what is described in this post for relational databases accessed by Spring Boot, but for Cassandra.

The question was also asked on https://community.datastax.com/questions/11846/ and I'm reposting my response here:
I'm not familiar with this API, but I think the credentials are used only for new connections initiated after the credentials are set. I suspect the change doesn't apply to existing connections, which continue to use the old credentials.
I'm going to reach out to the Driver devs here at DataStax and will either get them to respond directly or I will update my answer. Cheers!
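In the meantime, assuming the behaviour above is right, a possible workaround in 3.x is to recycle the session after rotating the credentials, so the driver is forced to open fresh connections that authenticate with the new username/password. A minimal sketch (the rotate helper and its names are hypothetical; only the PlainTextAuthProvider setters and the Cluster/Session lifecycle calls come from the driver):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PlainTextAuthProvider;
import com.datastax.driver.core.Session;

public class CredentialRotation {

    // Hypothetical helper: point the auth provider at the new credentials,
    // then close the old session and open a new one. Connections already in
    // the pool keep the credentials they were opened with; the new session's
    // connections should authenticate with the rotated ones.
    static Session rotate(Cluster cluster, Session oldSession,
                          String newUser, String newPassword) {
        PlainTextAuthProvider authProvider =
                (PlainTextAuthProvider) cluster.getConfiguration()
                                               .getProtocolOptions()
                                               .getAuthProvider();
        authProvider.setUsername(newUser);
        authProvider.setPassword(newPassword);

        oldSession.close();       // drains in-flight requests, closes old connections
        return cluster.connect(); // fresh connections use the new credentials
    }
}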

Related

Superset with Trino Impersonation and LDAP

I have a Trino cluster configured to use LDAP and I want to use Superset to connect to it.
The Trino cluster uses HTTPS with a self-signed certificate.
I managed to configure Superset to use LDAP, that's not the problem.
I also managed to query Trino by having the following configuration:
SQLAlchemy URI: trino://myuser:mypassword@trino_server:8443,
security extra config: {"connect_args": {"verify": false}}
Now here's the problem: under the Security tab there's a checkbox that says "Impersonate logged in user (Presto, Trino, Hive and GSheets)". I checked the box, but the queries I execute still run as the user "myuser" configured in the SQLAlchemy URI, instead of as the logged-in user.
I'm using Superset version 1.3.2
Does anybody know how to solve this?
There are two components to get user impersonation working with Trino and Superset:
A version of Superset that supports user impersonation with Trino.
This was added officially in 1.3.0, and since you're on 1.3.2 that shouldn't be a problem.
A Trino client that supports user impersonation.
AFAIK the only Python client that currently works with Superset to connect to Trino is sqlalchemy-trino. I couldn't find any specific changes made for user impersonation until 0.4.0, but I have gotten this working with the older 0.3.0 version.
There may be some other possibilities that could prevent user impersonation from working, but less likely:
Make sure that all containers have a working version of sqlalchemy-trino installed. This depends on how you add Python requirements, but I believe I've seen cases where Superset containers don't have the same dependencies, i.e. the superset_app container has the correct module, but not the superset_worker container.
Make sure that the HTTP headers in the requests going to Trino are not being modified. User impersonation works by authenticating with basic authentication but impersonating the user passed in an HTTP header called 'X-Trino-User'. If that header is removed or changed, user impersonation won't work as expected.
Just wanted to let you know that I managed to solve this issue.
The problem was that I had put the {"connect_args": {"verify": false}} configuration in the "SECURE EXTRA" section under the "Security" tab, instead of in the "ENGINE PARAMETERS" section under the "Other" tab.
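For anyone hitting the same wall, this is the placement that ended up working (values are the placeholders from the question):
SQLAlchemy URI: trino://myuser:mypassword@trino_server:8443
ENGINE PARAMETERS ("Other" tab): {"connect_args": {"verify": false}}
"Security" tab: "Impersonate logged in user" checked, SECURE EXTRA left empty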

Secure Elasticsearch installation retrospectively

I have an Elasticsearch installation (v7.3.2). Is it possible to secure it retrospectively? This link states that a password can only be set "during the initial configuration of the Elasticsearch". Basically, I want to require consumers of the RESTful API to provide a password going forward.
The elastic bootstrap password is used to initialize the internal/reserved users used by the components and features of the Elastic Stack (Kibana, Logstash, Beats, monitoring, ...).
If you want to secure the API, you need to create users/roles for your scenario on top.
Please use TLS in your cluster when handling passwords, and don't expose the cluster directly, for security reasons.
Here is all the information regarding securing a cluster, including some tutorials: https://www.elastic.co/guide/en/elasticsearch/reference/7.3/secure-cluster.html
EDIT: Added links as requested. Feel free to raise a new question here at SO if you're facing serious problems!
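In a nutshell, securing an existing 7.x cluster retrospectively starts with something like this (a sketch; the exact TLS setup depends on your environment, see the secure-cluster guide above):
# elasticsearch.yml (each node): turn on the security features
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

# initialize passwords for the built-in/reserved users (run once)
bin/elasticsearch-setup-passwords interactive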
Here you can find a complete guide to installing and securing Elasticsearch.
Basically, the bootstrap password is used initially to set up the built-in Elasticsearch users (like "elastic", "kibana"). Once this is done, you won't be able to access Elasticsearch anonymously, only as one of the built-in users, e.g. "elastic".
Then you can use the "elastic" user to create additional users (with their own passwords) and roles (e.g. to access specific indexes only in read-only mode).
As @ibexit wrote, it's highly recommended to secure your cluster and not expose it directly (use a proxy server, secured with SSL).
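To illustrate the users/roles step, a sketch against the security API (role name, index pattern, and password are placeholders):
# a role that may only read indexes matching logs-*
curl -u elastic -X PUT "https://localhost:9200/_security/role/logs_reader" \
  -H 'Content-Type: application/json' \
  -d '{"indices": [{"names": ["logs-*"], "privileges": ["read"]}]}'

# a user that holds only that role
curl -u elastic -X PUT "https://localhost:9200/_security/user/reader" \
  -H 'Content-Type: application/json' \
  -d '{"password": "choose-a-strong-one", "roles": ["logs_reader"]}'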

How to provide read backward compatibility after enabling role-based authentication in Cassandra?

We are going to change the Cassandra setting from authenticator: AllowAllAuthenticator to authenticator: PasswordAuthenticator to enable role-based authentication. There will be two roles:
admin, which is a superuser
read-only, which is only allowed to read.
I would like to provide backward compatibility for users of the Cassandra cluster. More specifically, many users use
shell scripts that use cqlsh
the Python cassandra package
the PHP cassandra package
to only read data from Cassandra. Currently they don't specify any username or password. Therefore I would like to make the read-only role some sort of "default" role: if no username and password are provided, the role is automatically set to read-only, so users can read data and clients don't need to change their code.
Is there a way to do this? I'm currently having trouble with the following two parts:
the default user is cassandra if no role/user is specified in cqlsh, and I did not find a way to change the default user/role;
even for the default cassandra user, I still have to set a password.
Any suggestions would be appreciated! Thanks in advance.
I come from an Oracle background, where I've done sqlplus "/as sysdba" for years. I like it because the O/S authenticates me. Now, there is something similar in Cassandra, but it isn't secure.
Basically, in your home directory there is a hidden subdirectory called ".cassandra". In that directory there is a file called "cqlshrc" (if there isn't, create one), so the path is ~/.cassandra/cqlshrc. In that file you can add authentication information that will allow someone to log in by simply typing "cqlsh" without anything else (unless you're connecting remotely, in which case you also need "host" and "port"). The cqlshrc file has, among other things, an authentication section that looks like this:
[authentication]
username = <your_user_name>
password = <your_password>
So you could simply put your desired username and password in that file, and you're able to connect without supplying credentials on the command line. (You could also run "cqlsh -u your_user_name" and it will find your password in your cqlshrc file as well.)
You can see a few obvious issues here:
1) The password is stored in clear text.
2) If you change the password, you need to update it in the cqlshrc file as well.
I do not recommend using the "cassandra" user for ANYTHING. In fact, I'd drop it. The reason is that the cassandra user does everything with CL=QUORUM. We found this out when investigating huge I/O requests coming from OpsCenter and our backup tool (as you can see, we use DSE). They were all using the cassandra user and pounding on the node(s) that had the cassandra authentication information. Having CL=QUORUM is apparently baked into the code - kinda dumb. Anyway, the above is one way to have users log in as a specific user without providing credentials, making it pretty easy to switch.
Hope that helps
-Jim
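For completeness, the read-only role from the question above is plain CQL once PasswordAuthenticator (plus an authorizer such as CassandraAuthorizer) is enabled; the keyspace name and password here are placeholders:
CREATE ROLE readonly WITH LOGIN = true AND PASSWORD = 'choose-a-strong-one';
GRANT SELECT ON KEYSPACE my_keyspace TO readonly;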

How to determine if Hazelcast CPSubsystem is enabled?

Is there any way to validate whether Hazelcast's CPSubsystem (introduced in HC 3.12) is enabled on an HC member or client? If I try to access the CPSubsystem and it isn't enabled, HC throws an exception. However, using exceptions for flow control is bad practice, and I would rather check whether it is enabled before accessing it.
I haven't been able to find any mechanism to allow me to query its status without tripping the exception. Does such a method exist?
Unfortunately, the only way to figure out whether the CPSubsystem is enabled is to check CPSubsystemConfig.cpMemberCount > 0. But the server configuration is not accessible on the client, so this does not work for clients.
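On a member, that check looks roughly like this (a minimal sketch against the Hazelcast 3.12 API; the helper name is made up):
import com.hazelcast.core.HazelcastInstance;

public class CpCheck {
    // The CP Subsystem is enabled when the configured CP member count is
    // non-zero. This works only on members: clients can't read the server config.
    static boolean isCpSubsystemEnabled(HazelcastInstance member) {
        return member.getConfig()
                     .getCPSubsystemConfig()
                     .getCPMemberCount() > 0;
    }
}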
PS: I have submitted an enhancement issue requesting a new API method: https://github.com/hazelcast/hazelcast/issues/15413

Database connection security in Node.js

When I'm connecting to a database in Node, I have to provide the DB name, username, password, etc. If I'm right, every user can access the JS file if they know its address. So... how does this work? Is it safe?
Node.js server-side source files should never be accessible to end-users.
In frameworks like Express, the convention is that requests for static assets are handled by the static middleware, which serves files only from a specific folder in your solution. Explicit requests for other source files that exist in your code base are thus ignored (a 404 is passed down the pipeline).
Consult https://expressjs.com/en/starter/static-files.html for more details.
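For illustration, a minimal Express app following that convention (the public folder name is the usual example):
import express from 'express';

const app = express();

// Requests are served only from the ./public folder; server-side sources
// (including whatever file holds the DB credentials) are never reachable.
app.use(express.static('public'));

app.listen(3000);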
Although there are other options to further limit the visibility of sensitive data, note that anyone with admin rights who gets access to your server would of course be able to retrieve the data (and this is perfectly normal).
I am assuming from the question that the DB and Node are on the same server. I am also assuming you have created a JSON file, an env file, or a function which picks up your DB parameters.
The one server = everything (code + DB) setup is not the best in the world. However, if you are limited to it, then it depends on the DB you are using. Mongo Community Edition will allow you to set up limited security protocols, such as creating users within the DB itself. These consist of a {username, password, rights} combination which grants scaled rights based upon the type of user you set up. This is not foolproof, but it provides some protection even if someone gets hold of your DB parameters. If you are using a more extended version of MongoDB, then this question would be superfluous. As for other DBs, you need to consult the documentation.
However, all that being said, you should really have the DB set up behind a public server and only allow SSH into it, with an open port to receive information from your program. The one server = everything format is not safe in the long run, though it is fine for development.
If you are using MongoDB, you may want to take a look at Mongoose coupled with Mongoose Encryption. I personally do not use them, but they may solve your problem in the short run.
If your DB is MySQL etc., then I suggest you look at the documentation.
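As a sketch of the env-file idea mentioned above (the variable names are arbitrary):
// db-config.ts: read credentials from the environment instead of
// hard-coding them, so they never appear in the source tree at all.
export const dbConfig = {
  host: process.env.DB_HOST ?? 'localhost',
  name: process.env.DB_NAME ?? '',
  user: process.env.DB_USER ?? '',
  password: process.env.DB_PASSWORD ?? '',
};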
