OpsCenter Community, keep data on different cluster - cassandra

Trying to set up OpsCenter Free to keep its data on a different cluster, but getting this error:
WARN: Unable to find a matching cluster for node with IP [u'x.x.x.1']; the message was {u'os-load': 0.35}. This usually indicates that an OpsCenter agent is still running on an old node that was decommissioned or is part of a cluster that OpsCenter is no longer monitoring.
I get the same error for the second node in the cluster :(
But if I set [dse].enterprise_override = true in the cluster config, everything works fine.
My config is:
user#casnode1:~/opscenter/conf/clusters# cat ClusterTest.conf
[jmx]
username =
password =
port = 7199
[kerberos_client_principals]
[kerberos]
[agents]
[kerberos_hostnames]
[kerberos_services]
[storage_cassandra]
seed_hosts = x.x.x.2
api_port = 9160
connect_timeout = 6.0
bind_interface =
connection_pool_size = 5
username =
password =
send_thrift_rpc = True
keyspace = OpsCenter2
[cassandra]
username =
seed_hosts = x.x.x.1, x.x.x.4
api_port = 9160
password =
So, the question is: is it possible in OpsCenter Community to use a different cluster to keep OpsCenter data?
OpsCenter version is 4.0.3

Is it possible in OpsCenter Community to use a different cluster to keep OpsCenter data?
It is not. Storing data on a separate cluster is only supported on DataStax Enterprise clusters.
Note: Using the override you mentioned without permission from DataStax is a violation of the OpsCenter license agreement, and will not be supported.

Related

How to connect to Cassandra Database using Python code

I followed the steps given in https://docs.datastax.com/en/developer/python-driver/3.25/getting_started/ to connect to the Cassandra database using Python code, but after running the code snippet I am still getting:
NoHostAvailable: ('Unable to connect to any servers', {'host:port': OperationTimedOut('errors=None, last_host=None'),
Python versions 2.7 and 3 (classpath is set for both Python versions)
Java 1.8 (classpath has been set)
Apache Cassandra 3.11.6 (Apache home classpath has been set)
I tend to use a very simple app to test connectivity to a Cassandra cluster:
from cassandra.cluster import Cluster
cluster = Cluster(['10.1.2.3'], port=45678)
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
if row:
    print(row[0])
Then run it:
$ python HelloCassandra.py
4.0.6
In your comment you mentioned that you're getting OperationTimedOut which indicates that the driver never got a response back from the node within the client timeout period. This usually means (a) you're connecting to the wrong IP, (b) you're connecting to the wrong CQL port, or (c) there's a network connectivity issue between your app and the cluster.
Make sure that you're using the IP address that you've set in rpc_address of cassandra.yaml. Also make sure that the node is listening for CQL clients on the right port. You can easily verify this with a Linux utility like netstat or lsof, for example:
$ sudo lsof -nPi -sTCP:LISTEN
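If you can't check on the node itself, a quick client-side sanity check is to try opening a plain TCP connection to the CQL port from the machine running your driver. This is just a sketch; the IP and port below are placeholders for your node's rpc_address and native transport port:
import socket

# Placeholder address and CQL (native transport) port -- substitute your own
host, port = "10.1.2.3", 9042

try:
    # If this times out or is refused, the Python driver will fail the same way
    sock = socket.create_connection((host, port), timeout=5)
    print("TCP connection to {}:{} succeeded".format(host, port))
    sock.close()
except OSError as exc:
    print("TCP connection to {}:{} failed: {}".format(host, port, exc))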
Cheers!
So that error message suggests that the host/port combination either does not have Cassandra running on it or is under heavy load and unable to respond.
Can you edit your question to include the Cassandra connection portion of your code, as well as maybe how you're calling it? I have a test script which I use (and you're welcome to check it out), and here is the connection portion:
import sys
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

protocol = 4
hostname = sys.argv[1]
username = sys.argv[2]
password = sys.argv[3]
nodes = []
nodes.append(hostname)
auth_provider = PlainTextAuthProvider(username=username, password=password)
cluster = Cluster(nodes, auth_provider=auth_provider, protocol_version=protocol)
session = cluster.connect()
I call it like this:
$ python3 testCassandra.py 127.0.0.1 aaron notReallyMyPassword
local
One other thing you might try would be to run nodetool status on the cluster, just to make sure it's running OK.
Edit
local variable 'session' referenced before assignment
So this sounds to me like you're attempting a session.execute before session = cluster.connect(). Have a look at my Git repo (linked above) to see the correct order for instantiating session.
I am not using default port
In that case, make sure the port is being set in the cluster definition. Ex:
port = 19099
cluster = Cluster(nodes, auth_provider=auth_provider, port=port)

Data transfer between two kerberos secured cluster

I am trying to transfer data between two Kerberos-secured clusters. The issue I am facing is that I have no access to change the configuration on the source cluster; I need to make every change on the destination cluster. Is there any way to set up a trust realm between the two clusters without editing any configuration on the source cluster?
If you are using distcp, then you will have to make sure both clusters' KDCs know about each other, by editing krb5.conf on each cluster to add [realms] and [domain_realm] entries for the other cluster, as follows:
[realms]
  <CLUSTER2_REALM> = {
    kdc = <cluster2_server_kdc_host>:88
    admin_server = <cluster2_server_kdc_host>:749
    default_domain = <cluster2_host>
  }

[domain_realm]
  Cluster2_NN1 = <CLUSTER2_REALM>
  Cluster2_NN2 = <CLUSTER2_REALM>
Similarly on cluster2 as well, with CLUSTER1 details.
Then you need to create the cross-realm trust principals on both clusters:
addprinc -e "aes128-cts-hmac-sha1-96:normal aes256-cts-hmac-sha1-96:normal" krbtgt/<CLUSTER1_REALM>@<CLUSTER2_REALM>
modprinc -maxrenewlife <n>days krbtgt/<CLUSTER1_REALM>@<CLUSTER2_REALM>
The following rules need to be set for hadoop.security.auth_to_local.
In Cluster1:
RULE:[1:$1@$0](.*@\Q<CLUSTER2_REALM>\E$)s/@\Q<CLUSTER2_REALM>\E$//
RULE:[2:$1@$0](.*@\Q<CLUSTER2_REALM>\E$)s/@\Q<CLUSTER2_REALM>\E$//
In Cluster2:
RULE:[1:$1@$0](.*@\Q<CLUSTER1_REALM>\E$)s/@\Q<CLUSTER1_REALM>\E$//
RULE:[2:$1@$0](.*@\Q<CLUSTER1_REALM>\E$)s/@\Q<CLUSTER1_REALM>\E$//
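These rules go into the hadoop.security.auth_to_local property in core-site.xml on the respective cluster. As a sketch, the Cluster1 side might look like this (the realm is a placeholder, and DEFAULT keeps the normal mapping for local principals):
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@\Q<CLUSTER2_REALM>\E$)s/@\Q<CLUSTER2_REALM>\E$//
    RULE:[2:$1@$0](.*@\Q<CLUSTER2_REALM>\E$)s/@\Q<CLUSTER2_REALM>\E$//
    DEFAULT
  </value>
</property>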
Restart kdc
/etc/init.d/krb5kdc stop
/etc/init.d/kadmin stop
/etc/init.d/krb5kdc start
/etc/init.d/kadmin start
Then fail over or restart the NameNodes.
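Once the trust is in place and you have a valid ticket on the destination cluster, the distcp itself is the usual invocation; the NameNode hosts and paths below are placeholders:
hadoop distcp hdfs://<cluster1_nn_host>:8020/source/path hdfs://<cluster2_nn_host>:8020/target/path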

Cassandra Connection with Groovy Script In SoapUI

Thanks for your time. I am trying to access a remote Cassandra DB in order to complete my assertions. I can see that the server is running:
Cassandra V 3.0.8.1293
Driver Type: Cassandra CQL
Datastax Java Driver for Apache Cassandra - Core [3.0.5]
So, I am trying to access the DB with the following simple code:
import com.datastax.driver.core.*

Cluster cluster = null;
try {
    cluster = Cluster.builder().addContactPoint("x.x.x.x").withCredentials("xxxxxxx", "xxxxxx").withPort(9042).build()
    Session session = cluster.connect();
    ResultSet rs = session.execute("select * from TABLE");
    Row row = rs.one();
} finally {
    if (cluster != null) cluster.close();
}
When I use cassandra-driver-core-2.0.1.jar I am getting the error:
ERROR:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /x.x.x.x(null))
I read the documentation and a lot of posts here and on other blogs, and saw that there may be an incompatibility with the driver version, so I tried upgrading the driver to several versions (cassandra-driver-core-2.5, cassandra-driver-core-3, cassandra-driver-core-3.2), but with those I am getting the following:
ERROR:java.lang.ExceptionInInitializerError
I have also tried to connect using JDBC, but to no avail, using the configuration proposed in this thread:
SoapUI JDBC connection with Apache Cassandra
I am actually running out of ideas. Can anyone propose a solution or point me in the right direction on how to achieve this, either with a tutorial or any other idea?
Thank you very much
I think you haven't enabled remote access to Cassandra.
Try enabling remote access using the configuration below.
File path: /etc/cassandra/default.conf/cassandra.yaml
rpc_address: 0.0.0.0
broadcast_rpc_address: <serverIPAddress>
After that, restart the Cassandra service.
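For example, on a typical package-based install (the service name and init system may differ on your machine):
sudo systemctl restart cassandra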

Lagom external Cassandra authentication

I have been trying to set up an external Cassandra for my Lagom setup.
In the root pom I have written:
<configuration>
  <unmanagedServices>
    <cas_native>http://ip:9042</cas_native>
  </unmanagedServices>
  <cassandraEnabled>false</cassandraEnabled>
</configuration>
In my impl's application.conf:
akka {
  persistent {
    journal {
      akka.persistence.journal.plugin = "this-cassandra-journal"

      this-cassandra-journal {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }

    snapshot-store {
      akka.persistence.snapshot-store.plugin = "this-cassandra-snapshot-store"

      this-cassandra-snapshot-store {
        contact-points = ["10.15.2.179"]
        port = 9042
        cluster-id = "cas_native"
        keyspace = "hello_snap"
        authentication.username = "cassandra"
        authentication.password = "rodney"
        # Parameter indicating whether the journal keyspace should be auto created
        keyspace-autocreate = true
        # Parameter indicating whether the journal tables should be auto created
        tables-autocreate = true
      }
    }
  }
}
But I get the error
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.s.CassandraSnapshotStore - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/ProductCoordinator]. Last known sequence number [0]
com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /10.15.2.179:9042: Host /10.15.2.179:9042 requires authentication, but no authenticator found in Cluster configuration
    at com.datastax.driver.core.AuthProvider$1.newAuthenticator(AuthProvider.java:40)
    at com.datastax.driver.core.Connection$5.apply(Connection.java:250)
    at com.datastax.driver.core.Connection$5.apply(Connection.java:234)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1442)
    at com.google.common.util.concurrent.Futures$AsyncChainingFuture.doTransform(Futures.java:1433)
    at com.google.common.util.concurrent.Futures$AbstractChainingFuture.run(Futures.java:1408)
    at com.google.common.util.concurrent.Futures$2$1.run(Futures.java:1177)
    at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:310)
    at com.google.common.util.concurrent.Futures$2.execute(Futures.java:1174)
I also tried providing this config
lagom.persistence.read-side {
  cassandra {
  }
}
How to make it work by providing credentials for Cassandra?
In Lagom, you may already use akka-persistence-cassandra settings for your journal and snapshot-store (see reference.conf in the source code, and scroll down to cassandra-snapshot-store.authentication.*). There's no need to configure the plugin selection because Lagom's support for Cassandra persistence already declares akka-persistence-cassandra as the Akka Persistence implementation:
akka.persistence.journal.plugin = cassandra-journal
akka.persistence.snapshot-store.plugin = cassandra-snapshot-store
See https://github.com/lagom/lagom/blob/c63383c343b02bd0c267ff176bfb4e48c7202d7d/persistence-cassandra/core/src/main/resources/play/reference-overrides.conf#L5-L6
The third and last bit to configure when connecting Lagom to Cassandra is Lagom's Read-Side. That is also doable via application.conf if you override the defaults.
Note how each storage may use a different Cassandra Ring/Keyspace/credentials/... so you can tune them separately.
See extra info in the Lagom docs.
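For example, since Lagom already wires up cassandra-journal and cassandra-snapshot-store, providing credentials in your impl's application.conf can be as simple as the sketch below (the contact point, keyspaces and credentials are the placeholders from your question; the key names follow the akka-persistence-cassandra reference.conf mentioned above):
cassandra-journal {
  contact-points = ["10.15.2.179"]
  port = 9042
  keyspace = "hello"
  authentication.username = "cassandra"
  authentication.password = "rodney"
  keyspace-autocreate = true
  tables-autocreate = true
}

cassandra-snapshot-store {
  contact-points = ["10.15.2.179"]
  port = 9042
  keyspace = "hello_snap"
  authentication.username = "cassandra"
  authentication.password = "rodney"
  keyspace-autocreate = true
  tables-autocreate = true
}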

Connecting to Cassandra with Spark

First, I have bought the new O'Reilly Spark book and tried those Cassandra setup instructions. I've also found other stackoverflow posts and various posts and guides over the web. None of them work as-is. Below is as far as I could get.
This is a test with only a handful of records of dummy test data. I am running the most recent Cassandra 2.0.7 Virtual Box VM provided by plasetcassandra.org linked from the main Cassandra project page.
I downloaded Spark 1.2.1 source and got the latest Cassandra Connector code from github and built both against Scala 2.11. I have JDK 1.8.0_40 and Scala 2.11.6 setup on Mac OS 10.10.2.
I run the spark shell with the cassandra connector loaded:
bin/spark-shell --driver-class-path ../spark-cassandra-connector/spark-cassandra-connector/target/scala-2.11/spark-cassandra-connector-assembly-1.2.0-SNAPSHOT.jar
Then I do what should be a simple row count type test on a test table of four records:
import com.datastax.spark.connector._
sc.stop
val conf = new org.apache.spark.SparkConf(true).set("spark.cassandra.connection.host", "192.168.56.101")
val sc = new org.apache.spark.SparkContext(conf)
val table = sc.cassandraTable("mykeyspace", "playlists")
table.count
I get the following error. What is confusing is that it is getting errors trying to find Cassandra at 127.0.0.1, but it also recognizes the host name that I configured which is 192.168.56.101.
15/03/16 15:56:54 INFO Cluster: New Cassandra host /192.168.56.101:9042 added
15/03/16 15:56:54 INFO CassandraConnector: Connected to Cassandra cluster: Cluster on a Stick
15/03/16 15:56:54 ERROR ServerSideTokenRangeSplitter: Failure while fetching splits from Cassandra
java.io.IOException: Failed to open thrift connection to Cassandra at 127.0.0.1:9160
<snip>
java.io.IOException: Failed to fetch splits of TokenRange(0,0,Set(CassandraNode(/127.0.0.1,/127.0.0.1)),None) from all endpoints: CassandraNode(/127.0.0.1,/127.0.0.1)
BTW, I can also use a configuration file at conf/spark-defaults.conf to do the above without having to close/recreate a Spark context or pass in the --driver-class-path argument. I ultimately hit the same error though, and the above steps seem easier to communicate in this post.
Any ideas?
Check the rpc_address config in the cassandra.yaml file on your Cassandra node. It's likely that the Spark connector is using that value from the system.local/system.peers tables, and it may be set to 127.0.0.1 in your cassandra.yaml.
The Spark connector uses Thrift to get token range splits from Cassandra. I'm betting this will eventually be replaced, as C* 2.1.4 has a new table called system.size_estimates (CASSANDRA-7688). It looks like it's getting the host metadata to find the nearest host and then making the query using Thrift on port 9160.
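If it is, a minimal sketch of the fix (assuming the address reachable from your Spark driver is the 192.168.56.101 you already use in your SparkConf) is to point rpc_address at that interface in cassandra.yaml on the node and restart Cassandra:
rpc_address: 192.168.56.101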
