Is there any way to validate whether Hazelcast's CPSubsystem (introduced in HC 3.12) is enabled by an HC member or client? If I try to access the CPSubsystem and it isn't enabled, HC will throw an exception. However, using exceptions for flow control is a bad practice, and I would rather check whether it is enabled before accessing it.
I haven't been able to find any mechanism to allow me to query its status without tripping the exception. Does such a method exist?
Unfortunately, the only way to figure out whether the CPSubsystem is enabled is to check that CPSubsystemConfig.cpMemberCount > 0. But the server configuration is not accessible on the client, so this does not work for clients.
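On a member, that check looks roughly like this (a minimal sketch; the "counter" structure name is just a placeholder):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class CpSubsystemCheck {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Member side only: the CP Subsystem is considered enabled when a CP member count is configured
        boolean cpEnabled = hz.getConfig().getCPSubsystemConfig().getCPMemberCount() > 0;

        if (cpEnabled) {
            // Safe to touch CP data structures without expecting the "CP Subsystem is not enabled" exception
            hz.getCPSubsystem().getAtomicLong("counter").incrementAndGet();
        }
    }
}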
PS: I have submitted an enhancement request for a new API method: https://github.com/hazelcast/hazelcast/issues/15413
I am running a Java app with DataStax Java driver version 3.3.0, and I am trying to change the Cassandra user and password at runtime. According to this issue, calling setUsername and setPassword on this object should make the driver reuse the new credentials for future calls to the DB, so I did something like this in my code:
// Grab the auth provider the running Cluster was built with
PlainTextAuthProvider authProvider = (PlainTextAuthProvider) cluster.getConfiguration().getProtocolOptions().getAuthProvider();
// Swap in the rotated credentials
authProvider.setUsername(user);
authProvider.setPassword(pwd);
But even after this change, I find that it still uses the user/password I set at startup.
This documentation indicates that in versions 4.x and later you can force a configuration reload at runtime.
So my question is: is there a way to force all new calls to Cassandra to start using the new user and password provided via PlainTextAuthProvider in version 3.x?
Extra: I am doing this to rotate Vault passwords when they expire and update the driver accordingly. Something like what is described in this post for a relational DB accessed by Spring Boot, but for Cassandra.
The question was also asked on https://community.datastax.com/questions/11846/ and I'm reposting my response here:
I'm not familiar with this API, but I think the credentials are used only for new connections initiated after the credentials are set. I suspect the change doesn't apply to existing connections, which continue to use the old credentials.
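If that's the case, the only workaround I can think of (a rough, untested sketch; the contact point and keyspace below are placeholders) would be to rebuild the Cluster with a fresh PlainTextAuthProvider when the credentials rotate, so that every new connection is opened with the new credentials:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PlainTextAuthProvider;
import com.datastax.driver.core.Session;

public class CredentialRotation {
    // Rough sketch: close the old Cluster (its connections keep the old credentials)
    // and build a new one with the rotated username/password.
    static Session rotate(Cluster oldCluster, String newUser, String newPassword) {
        oldCluster.close();

        Cluster newCluster = Cluster.builder()
                .addContactPoint("127.0.0.1")          // placeholder contact point
                .withAuthProvider(new PlainTextAuthProvider(newUser, newPassword))
                .build();
        return newCluster.connect("my_keyspace");      // placeholder keyspace
    }
}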
I'm going to reach out to the Driver devs here at DataStax and will either get them to respond directly or I will update my answer. Cheers!
Kind of a general question. I have a web app running on a private network, only to be used by associates. The application runs on three different servers, but for some reason it throws a view state error whenever workload management causes an open session to jump to a different server. Changing the view state saving method to client did not fix the issue, but doing that in addition to disabling the MyFaces core encryption did, which I believe affects saved-state encryption. Since this application runs on a private network, I am wondering if it is OK to leave the view states unencrypted. The forms and submits contain no sensitive data; however, there is a login. There is no "register new user" either, only the login, as the credentials are derived from a different source, and that is the only sensitive data. Guidance would be greatly appreciated! To be more clear, I'm wondering if anyone can explain to me whether or not this is a safe thing to do, and give me a reason as to why it is or is not safe.
If you use MyFaces 2.3.x, you can safely deactivate encryption when using server-side state saving, even on a public network. See: https://issues.apache.org/jira/browse/MYFACES-4133
However, I would not turn it off for client-side state saving.
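For reference, the two settings involved would look roughly like this in web.xml (a sketch; org.apache.myfaces.USE_ENCRYPTION is the MyFaces parameter I mean, and it only matters for the state-saving mode you pick):

<!-- Keep the view state on the server... -->
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>server</param-value>
</context-param>
<!-- ...and switch off MyFaces view state encryption -->
<context-param>
    <param-name>org.apache.myfaces.USE_ENCRYPTION</param-name>
    <param-value>false</param-value>
</context-param>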
I'm new to Parse and have a question about the security of data in Parse.
It really speeds up development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
How can we enforce security in Parse?
Can anyone please explain?
Basically, there are two levels of security. First, every connection to the Parse servers is made via SSL; every other way of trying to connect is automatically blocked by their servers. That's something you don't need to worry about, but it's good to know the connection is encrypted.
There is, however, something you can and should do to ensure the integrity and security of your data. It's called an ACL (access control list) and can be defined for every object. Here is the documentation for it. With ACLs you can define which users have read access to an object and which users can write/edit/delete it. Additionally, you can define global ACLs via the data browser.
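For instance, with the Android (Java) SDK, attaching an ACL to an object looks roughly like this (a small sketch; the "Post" class and "title" field are made-up names):

import com.parse.ParseACL;
import com.parse.ParseObject;
import com.parse.ParseUser;

public class AclExample {
    // Everyone may read the object, but only its creator may modify it.
    static void savePostWithAcl() {
        ParseObject post = new ParseObject("Post");
        post.put("title", "Hello");

        ParseACL acl = new ParseACL();
        acl.setPublicReadAccess(true);                        // anyone can read
        acl.setWriteAccess(ParseUser.getCurrentUser(), true); // only the current user can write
        post.setACL(acl);

        post.saveInBackground();
    }
}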
Another thing worth mentioning is client-enabled class creation. In development it is quite useful to have that enabled, because it means that every time you reference a class which doesn't already exist, it's automatically created for you. Similarly, there is the possibility to disable the creation of new fields for a class. While useful in development, these things should be disabled in production, unless you really have to have them enabled (which shouldn't be the case if your data structure is good enough).
Additionally, you should read this page by Parse, it's all about security and data integrity.
I am using TACACS+ to authenticate Linux users via the pam_tacplus.so PAM module, and it works without issues.
I have modified the pam_tacplus module to meet some of my custom requirements.
I know that, by default, TACACS+ does not have any means of supporting Linux groups or access-level control over Linux bash commands. However, I was wondering whether there is any way that some information could be passed from the TACACS+ server side to the pam_tacplus.so module, which could then be used to allow/deny access or modify the user's group on the fly [from the PAM module itself].
Example: if I could pass the priv-lvl number from the server to the client, it could be used for some decision making at the PAM module.
PS: I would prefer a method that involves no modification on the server side [code]; all modifications should be done on the Linux side, i.e. in the pam_tacplus module.
Thanks for any help.
Eventually I got it working.
Issue 1:
The issue I faced was that there is very little documentation available on configuring a TACACS+ server for a non-Cisco device.
Issue 2:
The tac_plus version that I am using
tac_plus -v
tac_plus version F4.0.4.28
does not seem to support the
service = shell protocol = ssh
option in the tac_plus.conf file.
So eventually I used
service = system {
    default attribute = permit
    priv-lvl = 15
}
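For context, that block sits inside a user (or group) entry in tac_plus.conf; a hypothetical fragment (the username and password are placeholders) would be:

user = alice {
    login = cleartext "placeholder-password"
    service = system {
        default attribute = permit
        priv-lvl = 15
    }
}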
On the client side (pam_tacplus.so),
I sent the AVP service=system in the authorization phase (pam_acct_mgmt), which forced the server to return the priv-lvl defined in the configuration file, which I then used to decide the privilege level of the user.
NOTE: In some documentation it is mentioned that service=system is not used anymore, so this option may not work with Cisco devices.
HTH
Depending on how you intend to implement this, PAM may be insufficient to meet your needs. The privilege level from TACACS+ isn't part of the 'authentication' step, but rather the 'authorization' step. If you're using pam_tacplus, that authorization takes place as part of the 'account' (aka pam_acct_mgmt) step in PAM. Unfortunately, however, *nix systems don't give you a lot of ability to do fine-grained control here -- you might be able to reject access based on an invalid 'service', 'protocol', or even particulars such as 'host' or 'tty', but probably not much beyond that. (priv_lvl is part of the request, not the response, and pam_tacplus always sends '0'.)
If you want to vary privileges on a *nix system, you probably want to work within that environment's capabilities. My suggestion would be to use groups as a means of producing a sort of 'role-based' access control. If you want these to exist on the TACACS+ server, then you'll want to introduce custom AVPs that are meaningful, and then associate those with the user.
You'll likely need an NSS (name service switch) module to accomplish this -- by the time you get to PAM, OpenSSH, for example, will have already determined that your user is "bogus" and will send along a similarly bogus password to the server. With an NSS module you can populate 'passwd' records for your users based on AVPs from the TACACS+ server. More details on NSS can be found in glibc's documentation for "Name Service Switch".
I've been asked to support 2 URLs for JMX access to our server:
A secure one (service:jmx:rmi://localhost/jndi/rmi://localhost:2020/jmxrmi)
An insecure one: (service:jmx:rmi://localhost/jndi/rmi://localhost:2020/insecure-jmxrmi)
The insecure one is primarily for demo purposes; no, it won't be used in production.
I can create a custom ConnectorServer for /jmxrmi and provide an interceptor that uses our security mechanism to verify credentials. However, if I just create a vanilla second ConnectorServer (no 'env' properties), accessing it with jconsole -debug initially tries secure access, puts up the dialog about that failing, and then asks if I want to try it insecurely.
The docs I've read from Oracle/Sun indicate that I can disable password auth and SSL using a couple of command-line -D switches. But then does that not mess with the /jmxrmi secure access?
How do I support both secure and non-secure connections at the same time? Note that I don't need them using the same URL, of course.
Thanks!
This is a tough one. When you disable auth and SSL with those switches, you do it for the whole JVM.
The JMX RMI protocol cannot distinguish between secured and non-secured connections: either you set up security and it is used, or you don't. I think the best option would be to use a custom ConnectorServer and put up with the messages JConsole produces.
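A rough sketch of what the two connectors might look like in one JVM is below (the port, password-file path, and the SSL/password env keys are assumptions on my part; the secure half could just as well use your custom interceptor instead):

import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.rmi.RMIConnectorServer;
import javax.rmi.ssl.SslRMIClientSocketFactory;
import javax.rmi.ssl.SslRMIServerSocketFactory;

public class DualJmxConnectors {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        LocateRegistry.createRegistry(2020); // one RMI registry, two bound names

        // Secure connector: SSL socket factories plus file-based authentication
        Map<String, Object> secureEnv = new HashMap<>();
        secureEnv.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, new SslRMIClientSocketFactory());
        secureEnv.put(RMIConnectorServer.RMI_SERVER_SOCKET_FACTORY_ATTRIBUTE, new SslRMIServerSocketFactory());
        secureEnv.put("jmx.remote.x.password.file", "/path/to/jmxremote.password"); // placeholder path
        JMXConnectorServer secure = JMXConnectorServerFactory.newJMXConnectorServer(
                new JMXServiceURL("service:jmx:rmi://localhost/jndi/rmi://localhost:2020/jmxrmi"),
                secureEnv, mbs);
        secure.start();

        // Insecure connector: no env at all, so neither authentication nor SSL is enforced
        JMXConnectorServer insecure = JMXConnectorServerFactory.newJMXConnectorServer(
                new JMXServiceURL("service:jmx:rmi://localhost/jndi/rmi://localhost:2020/insecure-jmxrmi"),
                null, mbs);
        insecure.start();
    }
}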