I am using Apache Ignite 2.8.0. When persistence is enabled, my server starts in the inactive state. When I try to activate the cluster (only one server) with control.bat --activate, it asks for a username and password, but when I activate it from code with ignite.cluster().active(true); it doesn't ask.
Why doesn't it ask for a username and password when I activate the cluster from code?
You can only do that from a node that's already part of the topology (obviously) and has thus already passed security checks.
Apache Ignite currently only has thin client authentication. If you're looking for server-to-server authentication, use SSL or look at the GridGain security plugin.
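For illustration only, a minimal sketch (not from the question) of a server node with persistence and Ignite's built-in authentication enabled, activated from code, alongside a thin client that does have to present credentials. The address and the default ignite/ignite superuser are assumptions for a stock setup:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IgniteActivationSketch {
    public static void main(String[] args) {
        // Server node: persistence plus built-in authentication
        // (the same switch that makes control.bat prompt for credentials).
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setAuthenticationEnabled(true)
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(
                    new DataRegionConfiguration().setPersistenceEnabled(true)));

        Ignite ignite = Ignition.start(cfg);

        // This call runs inside a node that is already part of the topology,
        // so it is not challenged for a username and password.
        ignite.cluster().active(true);

        // A thin client, by contrast, must authenticate explicitly.
        ClientConfiguration clientCfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")
            .setUserName("ignite")      // assumed default superuser
            .setUserPassword("ignite");
        IgniteClient client = Ignition.startClient(clientCfg);
        System.out.println("Thin client connected: " + client);
    }
}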
I have a standalone spark cluster on Kubernetes and I want to use that to load some temp views in memory and expose them via JDBC using spark thrift server.
I already got it working with no security by submitting a Spark job (PySpark in my case) and starting the Thrift Server in that same job, so I can access the temp views.
Since I'll need to expose some sensitive data, I want to apply at least an authentication mechanism.
I've been reading a lot and I see basically 2 methods to do so:
PAM - which is not advised for production, since some critical files need to grant read permission to users other than root.
Kerberos - which appears to be the most appropriate one for this situation.
My question is:
- For a standalone Spark cluster (running on K8s), is Kerberos the best approach? If not, which one is?
- If Kerberos is the best one, it's really hard to find guidance or a step-by-step on how to set up Kerberos to work with the Spark Thrift Server, especially in my case where I'm not using any specific distribution (MapR, Hortonworks, etc.).
Appreciate your help
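In case it helps to see the shape of it: the Spark Thrift Server reuses HiveServer2, so Kerberos is switched on through the usual hive.server2.* settings. A rough sketch, assuming the stock start-thriftserver.sh launcher and placeholder master address, principal, and keytab path; if you keep starting the server inside your PySpark job, the same properties would go into hive-site.xml on the classpath instead:

./sbin/start-thriftserver.sh \
  --master spark://spark-master:7077 \
  --hiveconf hive.server2.authentication=KERBEROS \
  --hiveconf hive.server2.authentication.kerberos.principal=hive/_HOST@EXAMPLE.COM \
  --hiveconf hive.server2.authentication.kerberos.keytab=/etc/security/keytabs/hive.keytab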
Does anyone have insights on achieving fault tolerance in Apache Livy? Say, for instance, the Livy server fails; how can we achieve HA?
Actually, using multiple Livy servers behind a load balancer doesn't work right now due to this bug: https://issues.apache.org/jira/browse/LIVY-541
In contrast, for deployments that require high availability, Livy supports session recovery using ZooKeeper, which ensures that a Spark cluster remains available if the Livy server fails. After a restart, the Livy server can reconnect to existing sessions and recover their state from before the failure.
If you want Livy sessions to persist across restarts, just set these properties in livy.conf:
livy.server.recovery.mode = recovery
livy.server.recovery.state-store = filesystem
livy.server.recovery.state-store.url = file:///home/livy
You can use an hdfs:// URL for the state store as well; a ZooKeeper-backed variant is sketched below.
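For the ZooKeeper-based recovery mentioned above, the equivalent settings look roughly like this (the quorum address is a placeholder for your own ensemble):

livy.server.recovery.mode = recovery
livy.server.recovery.state-store = zookeeper
livy.server.recovery.state-store.url = zk1:2181,zk2:2181,zk3:2181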
We are trying to add a node to an existing ring in which security is enabled and the default cassandra user has been made non-super. Also, we altered the keyspace to NetworkTopologyStrategy with replication = number of nodes. The ring is currently on AWS.
Once the new node joins the cluster, the only user we see is the non-super cassandra user, and we are pretty much locked out of the cluster. However, once we remove the newly joined node, all the security we had before comes back.
Are there any best practices we need to follow to enable security in Cassandra 3.9?
Thanks in advance for helping me out on this!
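For reference, the commonly recommended practice when authentication is enabled is to put the system_auth keyspace itself on NetworkTopologyStrategy and repair it after topology changes. A rough sketch, assuming a single data center named us-east and a replication factor of 3 (both placeholders for your ring):

ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
-- then, on each node:
-- nodetool repair system_auth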
DataStax has restricted the use of OpsCenter to enterprise users only. Is there any way or possibility that even an open source Cassandra user can have access to OpsCenter? Kindly check the following image and let me know if there is any possibility of using OpsCenter.
I am trying to connect to it but am getting the following error:
No. However, you could consider building your own monitoring on top of the metrics Cassandra exposes via JMX instead.
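If it helps, here is a rough sketch of that approach in Java, assuming Cassandra's default JMX port 7199 on localhost and using the standard storage-load MBean; host, port, and the chosen metric are placeholders for your cluster:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CassandraJmxProbe {
    public static void main(String[] args) throws Exception {
        // Connect to the node's JMX endpoint (default port 7199).
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Standard Cassandra metric: live data size on this node.
            ObjectName load = new ObjectName(
                "org.apache.cassandra.metrics:type=Storage,name=Load");
            Object bytes = mbs.getAttribute(load, "Count");
            System.out.println("Live data size on this node (bytes): " + bytes);
        }
    }
}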
I want to run a Spark example on my Mesos master node and it gives me this problem. It just 'stops' here without showing any results or exceptions.
I0908 18:38:01.636055  9044 sched.cpp:226] Version: 1.0.1
I0908 18:38:01.636485 28512 sched.cpp:330] New master detected at master@124.216.0.14:5050
I0908 18:38:01.636642 28512 sched.cpp:341] No credentials provided. Attempting to register without authentication
There's not enough info there to debug, unfortunately.
New master detected at master@124.216.0.14:5050
This is the normal debug message during the Spark startup process or after Mesos master re-election. It discovered a Mesos master it didn't know about and acted accordingly.
No credentials provided. Attempting to register without authentication
This line is just a control flow debug message, a red herring that looks like an error but is really harmless in most cases.
It indicates that Mesos authentication is being skipped when Spark registers with the master as a framework. Optional framework/service authentication was added for compatibility with Mesosphere Enterprise DC/OS, which allows strict, permissive, or disabled security modes when using the Mesos API. If you're using open source DC/OS or just plain Mesos, it's probably running in disabled security mode, because it doesn't have the authorization or service-account infrastructure.
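For completeness: if the Mesos master did require framework authentication, Spark can supply credentials via configuration. A minimal sketch with placeholder principal, secret, and application jar, reusing the master address from the log above; this is only needed when the master enforces authentication:

spark-submit \
  --master mesos://124.216.0.14:5050 \
  --conf spark.mesos.principal=<framework-principal> \
  --conf spark.mesos.secret=<framework-secret> \
  path/to/your-app.jar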