Read Cassandra metrics using JMX in Java - cassandra

How can I produce live metrics of Cassandra in Java using JMX/Metrics? I want to run Cassandra JMX commands to collect Cassandra metrics. Examples will be much appreciated.

All of Cassandra's metrics exposed via JMX are documented in the official documentation. And because Cassandra uses the Metrics library, you may not need to use JMX to capture metrics at all - see the note at the end of the referenced page for more information (and the conf/metrics-reporter-config-sample.yaml example file in Cassandra's distribution).
P.S. Maybe I misunderstood the question - can you provide more details? Are you looking for commands to collect those metrics from Cassandra, or for code snippets in Java?
From Java you can access a particular metric with something like this:
import java.util.Set;
import javax.management.*;
import javax.management.remote.*;
import org.apache.cassandra.metrics.CassandraMetricsRegistry;

// Connect to the node's JMX port (7199 by default)
JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();

// Find the total read latency counter by its MBean ObjectName
Set<ObjectInstance> objs = mbsc.queryMBeans(ObjectName.getInstance(
        "org.apache.cassandra.metrics:type=ClientRequest,scope=Read-ALL,name=TotalLatency"), null);
for (ObjectInstance obj : objs) {
    Object proxy = JMX.newMBeanProxy(mbsc, obj.getObjectName(),
            CassandraMetricsRegistry.JmxCounterMBean.class);
    if (proxy instanceof CassandraMetricsRegistry.JmxCounterMBean) {
        System.out.println("TotalLatency = "
                + ((CassandraMetricsRegistry.JmxCounterMBean) proxy).getCount());
    }
}
jmxc.close();
A more detailed example can be found in JmxCollector from the cassandra-metrics-collector project.
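If you need a continuously updating ("live") view of a metric rather than a one-shot read, a small poller along the lines below should do it. This is only a minimal sketch: it assumes the same node, port, and MBean as above, and reads the counter's Count attribute directly instead of going through a proxy.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LiveMetricPoller {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url, null)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ObjectName name = ObjectName.getInstance(
                    "org.apache.cassandra.metrics:type=ClientRequest,scope=Read-ALL,name=TotalLatency");
            // Sample the counter every 5 seconds; counters are cumulative,
            // so the delta between samples gives the rate of change.
            while (true) {
                Object count = mbsc.getAttribute(name, "Count");
                System.out.println("TotalLatency = " + count);
                Thread.sleep(5000);
            }
        }
    }
}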

Related

How to pass session parameters with Python to Snowflake?

The code below is my attempt at passing a session parameter to Snowflake through Python. This is part of an existing codebase which runs in AWS Glue, and the only part of the following that doesn't work is session_parameters.
I'm trying to understand how to add session parameters from within this code. Any help in understanding what is going on here is appreciated.
sf_credentials = json.loads(CACHE["SNOWFLAKE_CREDENTIALS"])
CACHE["sf_options"] = {
    "sfURL": "{}.snowflakecomputing.com".format(sf_credentials["account"]),
    "sfUser": sf_credentials["user"],
    "sfPassword": sf_credentials["password"],
    "sfRole": sf_credentials["role"],
    "sfDatabase": sf_credentials["database"],
    "sfSchema": sf_credentials["schema"],
    "sfWarehouse": sf_credentials["warehouse"],
    "session_parameters": {
        "QUERY_TAG": "Something",
    },
}
In AWS CloudWatch, I can see that the parameter was sent with the other options. In Snowflake, the parameter was never set.
I can add more detail where necessary; I just wasn't sure what details are needed.
It turns out that there is no need to specify that a given parameter is a session parameter when you are using the Spark Connector. So instead:
sf_credentials = json.loads(CACHE["SNOWFLAKE_CREDENTIALS"])
CACHE["sf_options"] = {
    "sfURL": "{}.snowflakecomputing.com".format(sf_credentials["account"]),
    "sfUser": sf_credentials["user"],
    "sfPassword": sf_credentials["password"],
    "sfRole": sf_credentials["role"],
    "sfDatabase": sf_credentials["database"],
    "sfSchema": sf_credentials["schema"],
    "sfWarehouse": sf_credentials["warehouse"],
    "QUERY_TAG": "Something",
}
This works perfectly.
I found this in the Snowflake documentation for using the Spark Connector - here's the section on setting session parameters.

Does the ArangoDB Java driver provide an API for GEO_INTERSECTS?

For the arangodb-spring-data library, version 3.2.3, is there any way to query for GEO_INTERSECTS functionality using the Java API provided by the driver?
Currently I am using an AQL query in my code:
LET areaLiteral = GEO_POLYGON(...)
FOR doc IN MyDocuments
    FILTER GEO_INTERSECTS(areaLiteral, doc.geometry)
    LIMIT 5
    RETURN doc
So far I couldn't find anything related to GEO_INTERSECTS in the official documentation, nor in this example: https://github.com/arangodb/spring-data-demo#geospatial-queries
I have checked the source code of the driver, but didn't find anything related to the keyword "INTERSECTS" that would construct this query behind the scenes.
It is not yet supported; the only supported geospatial queries are Near and Within:
https://www.arangodb.com/docs/3.6/drivers/spring-data-reference-repositories-queries-derived-queries.html#geospatial-queries
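Until that lands, one possible workaround (a hedged sketch only - the repository and entity names are hypothetical, and I haven't verified this against 3.2.3) is to put the AQL behind a @Query repository method with bind parameters:

import java.util.List;

import com.arangodb.springframework.annotation.Query;
import com.arangodb.springframework.repository.ArangoRepository;

// Hypothetical repository; MyDocument is assumed to be a mapped entity
// whose "geometry" field holds GeoJSON.
public interface MyDocumentRepository extends ArangoRepository<MyDocument, String> {

    // @0 is bound to the first method argument (the polygon's coordinate rings)
    @Query("FOR doc IN MyDocuments "
            + "FILTER GEO_INTERSECTS(GEO_POLYGON(@0), doc.geometry) "
            + "LIMIT 5 RETURN doc")
    List<MyDocument> findIntersecting(List<List<Double>> polygonCoordinates);
}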

Not loading data from Titan graph with Cassandra backend using Gremlin

I have added data to Titan (Cassandra backend) using the Blueprints API in Java. I used the following configuration in Java for inserting data:
TitanGraph getTitanGraph()
{
    BaseConfiguration conf2 = new BaseConfiguration();
    conf2.setProperty("storage.backend", "cassandra");
    conf2.setProperty("storage.directory", "/some/directory");
    conf2.setProperty("storage.read-only", "false");
    conf2.setProperty("attributes.allow-all", true);
    return TitanFactory.open(conf2);
}
Now I am trying to query that database using Gremlin. I used the following command to load it:
g = TitanFactory.open("bin/cassandra.local");
The following is my cassandra.local file:
conf = new BaseConfiguration();
conf.setProperty("storage.backend", "cassandra");
conf.setProperty("storage.hostname", "127.0.0.1");
conf.setProperty("storage.read-only", "false");
conf.setProperty("attributes.allow-all", true);
But when I run "g.V", I get an empty graph. Please help.
Thanks!
Make sure that you commit the changes to your TitanGraph after making graph mutations in your Java program. If you're using Titan 0.5.x, the call is graph.commit(). If you're using Titan 0.9.x, the call is graph.tx().commit().
Note that storage.directory isn't valid for a Cassandra backend, however the default value for storage.hostname is 127.0.0.1 so those should be the same between your Java program and cassandra.local. It might be easier to use a properties file to store your connection properties.
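For illustration, a minimal sketch of the mutate-then-commit pattern on Titan 0.5.x (the property name and value are just placeholders):

import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.Vertex;

// Mutations only become visible to other clients (e.g. the Gremlin REPL)
// after an explicit commit; on Titan 0.9.x use graph.tx().commit() instead.
TitanGraph graph = getTitanGraph();   // the method from the question above
Vertex v = graph.addVertex(null);
v.setProperty("name", "example");     // placeholder property
graph.commit();                       // without this, g.V in the REPL stays empty
graph.shutdown();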

Storm Cassandra Integration

I am a newbie in both Storm and Cassandra. I want to use a Bolt to write the strings emitted by a Spout to a column family in Cassandra. I have read the example here, which seems a little complex to me, as it uses different classes for writing to the Cassandra DB. Furthermore, I want to know how many times the strings are written to the Cassandra DB. In the example, it is not clear to me how to control the number of strings entered in the Cassandra DB.
Put simply, I need a Bolt to write the strings emitted by a Spout to a Cassandra column family, e.g., 200 records.
Thanks in advance!
You can either use the DataStax Cassandra driver or the storm-cassandra library you posted earlier.
Your requirement is unclear - do you only want to store 200 tuples?
Either way, run the topology with sample data and, after the stream is finished, query Cassandra and see what is there.
Apache Storm and Apache Cassandra are quite deep and extensive projects. There is no way around learning them and doing sample projects.
Hope this will help.
/* Main class */
TopologyBuilder builder = new TopologyBuilder();

Config conf = new Config();
conf.put("cassandra.keyspace", "Storm_Output");                  // keyspace name
conf.put("cassandra.nodes", "ip-address-of-cassandra-machine");
conf.put("cassandra.port", 9042);                                // port Cassandra listens on (default: 9042)

builder.setSpout("generator", new RandomSentenceSpout(), 1);
builder.setBolt("counter", new CassandraInsertionBolt(), 1).shuffleGrouping("generator");
builder.setBolt("CassandraBolt", new CassandraWriterBolt(
        async(
            simpleQuery("INSERT INTO Storm_Output.table_name (field1, field2) VALUES (?, ?);")
                .with(fields("field1", "field2"))
        )
), 1).globalGrouping("counter");

conf.setDebug(true);
conf.setNumWorkers(1);
StormSubmitter.submitTopologyWithProgressBar("Cassandra-Insertion", conf, builder.createTopology());

/* CassandraInsertionBolt - the bolt emitting data for insertion into Cassandra */
public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
    Random rand = new Random();
    basicOutputCollector.emit(new Values(rand.nextInt(20), rand.nextInt(20)));
}

public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    outputFieldsDeclarer.declare(new Fields("field1", "field2"));
}
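On the original question of capping the writes at, say, 200 records: the storm-cassandra DSL above does not limit the count, but a plain bolt using the DataStax Java driver can keep its own counter and stop writing once the cap is reached. The sketch below is illustrative only - the class, keyspace, and table names are hypothetical, the count is per bolt instance, and it assumes Storm 1.x-style APIs with DataStax driver 3.x on the classpath.

import java.util.Map;

import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Hypothetical terminal bolt: writes at most `limit` strings to Cassandra.
public class CappedCassandraBolt extends BaseBasicBolt {
    private final int limit = 200;      // stop writing after 200 records
    private transient Session session;
    private transient PreparedStatement insert;
    private int written = 0;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        session = cluster.connect("storm_output");   // hypothetical keyspace
        insert = session.prepare("INSERT INTO sentences (id, text) VALUES (uuid(), ?)");
    }

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        if (written < limit) {
            session.execute(insert.bind(tuple.getString(0)));
            written++;
        } // tuples beyond the cap are acknowledged but not stored
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt: emits nothing downstream
    }

    @Override
    public void cleanup() {
        session.getCluster().close();   // also closes the session
    }
}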

How to create and retrieve a graph database using the Java API in Cassandra

I am trying to create a graph with weighted nodes and edges in a Cassandra database using the Titan Graph API. How can I retrieve this graph so that I can visualize it?
Is Rexster or Gremlin the solution for this? If so, please tell me the process.
I have a 4-node Cassandra cluster and run Titan on top of it. I made this program, which creates two nodes, then creates an edge between them, and finally queries and prints it.
public static void main(String args[])
{
    BaseConfiguration baseConfiguration = new BaseConfiguration();
    baseConfiguration.setProperty("storage.backend", "cassandra");
    baseConfiguration.setProperty("storage.hostname", "192.168.1.10");
    TitanGraph titanGraph = TitanFactory.open(baseConfiguration);

    Vertex rash = titanGraph.addVertex(null);
    rash.setProperty("userId", 1);
    rash.setProperty("username", "rash");
    rash.setProperty("firstName", "Rahul");
    rash.setProperty("lastName", "Chaudhary");
    rash.setProperty("birthday", 101);

    Vertex honey = titanGraph.addVertex(null);
    honey.setProperty("userId", 2);
    honey.setProperty("username", "honey");
    honey.setProperty("firstName", "Honey");
    honey.setProperty("lastName", "Anant");
    honey.setProperty("birthday", 201);

    Edge frnd = titanGraph.addEdge(null, rash, honey, "FRIEND");
    frnd.setProperty("since", 2011);
    titanGraph.commit();

    Iterable<Vertex> results = rash.query().labels("FRIEND").has("since", 2011).vertices();
    for (Vertex result : results)
    {
        System.out.println("Id: " + result.getProperty("userId"));
        System.out.println("Username: " + result.getProperty("username"));
        System.out.println("Name: " + result.getProperty("firstName") + " " + result.getProperty("lastName"));
    }
}
In my pom.xml, I just have this dependency:
<dependency>
    <groupId>com.thinkaurelius.titan</groupId>
    <artifactId>titan-cassandra</artifactId>
    <version>0.5.4</version>
</dependency>
I am running Cassandra 2.1.2.
I don't know much about Gremlin, but I believe Gremlin is the shell that you can use to query your database from the command line. Rexster is an API that you can use on top of any other API that uses Blueprints (like Titan) to make your query/code accessible to others via a REST API. Since you want to use embedded Java with Titan, you don't need Gremlin. With the dependency that I have stated, the Blueprints API comes with it, and using that API (as I have done in my code), you can do everything with your graph.
You might find this cheat sheet useful: http://www.fromdev.com/2013/09/Gremlin-Example-Query-Snippets-Graph-DB.html
First note that TitanGraph uses the Blueprints API; thus, the Titan API is the Blueprints API. As you are using Blueprints, you can use Gremlin, Rexster, or any other part of the TinkerPop stack to process your graph.
How you visualize your graph once it is in Titan depends on the graph visualization tool you choose. If I assume you are using Gephi or a similar tool that can consume GraphML, then the easiest way to get the data out of Titan is to open a Gremlin REPL, get a reference to the graph, and just do:
g = TitanFactory.open(...)
g.saveGraphML('/tmp/my-graph.xml')
From there you could import my-graph.xml into Gephi and visualize it.
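If you are staying in embedded Java rather than the Gremlin REPL, the Blueprints utilities pulled in by the titan-cassandra dependency can produce the same GraphML file - a sketch under that assumption:

import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.commons.configuration.BaseConfiguration;

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.util.io.graphml.GraphMLWriter;

// Export the whole graph to GraphML so it can be imported into Gephi.
public class ExportGraph {
    public static void main(String[] args) throws Exception {
        BaseConfiguration conf = new BaseConfiguration();
        conf.setProperty("storage.backend", "cassandra");
        conf.setProperty("storage.hostname", "192.168.1.10");
        TitanGraph graph = TitanFactory.open(conf);
        try (OutputStream out = new FileOutputStream("/tmp/my-graph.xml")) {
            GraphMLWriter.outputGraph(graph, out);
        }
        graph.shutdown();
    }
}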
