I am using JMX MBeans to monitor Cassandra metrics. I am looking for the equivalent of OpsCenter's "CF: Local Reads" metric. Does anyone know which JMX MBean corresponds to it?
The metrics are listed here: http://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics
You want the ReadLatency MBean from the table metrics:
org.apache.cassandra.metrics:type=Table,keyspace=YOURKEYSPACE,scope=YOURTABLE,name=ReadLatency
or type=ColumnFamily for older versions of C*.
OpsCenter uses the values operation to get the raw histogram, so its numbers may look slightly different from reading the MBean attributes directly, but the decaying histogram is less accurate when monitoring over time, so going by the raw values is better. It is described in this presentation.
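For example, here is a minimal sketch of reading that MBean, and invoking the values operation, over JMX from Java. The host/port, keyspace, and table names are placeholders (7199 is Cassandra's default JMX port), and the exact attribute set depends on your Cassandra version:

    import java.util.Arrays;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ReadLatencyProbe {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                ObjectName name = new ObjectName(
                        "org.apache.cassandra.metrics:type=Table,"
                        + "keyspace=mykeyspace,scope=mytable,name=ReadLatency");
                // Standard timer attributes: cumulative count and a percentile.
                System.out.println("Count = " + mbs.getAttribute(name, "Count"));
                System.out.println("99th  = " + mbs.getAttribute(name, "99thPercentile"));
                // The values operation returns the raw histogram buckets
                // (what OpsCenter reads).
                long[] buckets = (long[]) mbs.invoke(name, "values", null, null);
                System.out.println("raw buckets: " + Arrays.toString(buckets));
            }
        }
    }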
Related
Is there any way to log queries along with user that executed the query in Cassandra community edition?
I'm looking for a server-level solution, not a driver/client-based one.
Thanks!
Try nodetool settraceprobability
nodetool settraceprobability <value>
Sets the probability for tracing a request.
Value is a probability between 0 and 1.
Tracing a request usually requires at least 10 rows to be inserted.
A probability of 1.0 will trace everything, whereas lesser amounts (for example, 0.10) only sample a certain percentage of statements.
The trace information is stored in the system_traces keyspace, which holds two tables, sessions and events. These can easily be queried to answer questions such as what the most time-consuming query has been since tracing was started. Query the parameters map and thread column in the system_traces.sessions and events tables for probabilistic tracing information.
Note: Care should be taken on large and active systems, as system-wide tracing will have a performance impact. Unless you are under very light load, tracing all requests (probability 1.0) will probably overwhelm your system.
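For example, a minimal sketch with the DataStax java-driver 3.x that pulls recent traced sessions and their durations (the contact point and LIMIT are illustrative; any CQL client would work just as well):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class TracedSessions {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                ResultSet rs = session.execute(
                        "SELECT session_id, duration, parameters "
                        + "FROM system_traces.sessions LIMIT 50");
                for (Row row : rs) {
                    // duration is in microseconds; parameters holds the query text.
                    System.out.printf("%s took %d us: %s%n",
                            row.getUUID("session_id"),
                            row.getInt("duration"),
                            row.getMap("parameters", String.class, String.class));
                }
            }
        }
    }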
If you don't want to use this, then you have to log the query from the client side (see "How to use Query Logger?"). There is no other way.
Source : https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsSetTraceProbability.html
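For the client-side route mentioned above, a sketch of the java-driver 3.x QueryLogger (the contact point and the 500 ms slow-query threshold are illustrative values):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.QueryLogger;

    public class ClientSideQueryLogging {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            QueryLogger queryLogger = QueryLogger.builder()
                    .withConstantThreshold(500) // ms; slower queries log as SLOW
                    .build();
            cluster.register(queryLogger);
            // Queries are now logged through SLF4J under the
            // com.datastax.driver.core.QueryLogger.NORMAL/SLOW/ERROR loggers
            // (enable DEBUG/TRACE for them in your logging configuration).
        }
    }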
I have set up a new Cassandra 3.3 cluster and use jvisualvm to monitor Cassandra read/write latency via MBeans (JMX metrics).
The read/write latency has been flat on all nodes for many weeks, whereas the read/write request rate on that cluster moves normally (heavier on some days, lighter on others).
When I use jvisualvm to monitor a Cassandra 2.0 cluster, the read/write latency behaves normally: it moves with the read/write requests.
I wonder why the read/write latency statistics of Cassandra 3.0+ are always flat; I think the result is incorrect. (I have load tested Cassandra v3.3 and v3.7.)
[Updated]
I have found a bug related to this issue:
Flat Cassandra metrics. https://issues.apache.org/jira/browse/CASSANDRA-11752
The ticket says the problem was fixed in C* versions 2.2.8, 3.0.9, and 3.8, but after testing version 3.0.9, the latency still shows a flat line.
Any idea?
Thanks.
I have not found any metrics problem when using C* 3.3.
First, try monitoring with jconsole: do you see the same issue there?
Second, which attribute are you looking at, the average value or a percentile? These values are counted from node startup, so it is common for the percentile values to look the same, but that does not usually happen with the average value. Try restarting the Cassandra node and check the value again.
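If you want to confirm that requests are actually moving despite a flat-looking graph, one option is to sample the cumulative Count attribute twice and look at the delta. A sketch (host, keyspace, and table are placeholders):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class LatencyDelta {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                ObjectName read = new ObjectName(
                        "org.apache.cassandra.metrics:type=Table,"
                        + "keyspace=mykeyspace,scope=mytable,name=ReadLatency");
                long before = (Long) mbs.getAttribute(read, "Count");
                Thread.sleep(60_000); // one-minute sample interval
                long after = (Long) mbs.getAttribute(read, "Count");
                // A non-zero delta means reads are flowing even if the
                // cumulative attributes barely move.
                System.out.println("reads in the last minute: " + (after - before));
            }
        }
    }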
Based on metrics-reporter-config-sample.yaml, some metrics are not exported by either the CsvReporter or the ConsoleReporter, in particular:
org.apache.cassandra.metrics.DroppedMessage.+
org.apache.cassandra.metrics.ReadRepair.+
org.apache.cassandra.metrics.ColumnFamily.system.+
// or any other keyspace metrics
Observed with Cassandra versions DSE 5.x and DDC 3.7.
However, the keyspace metrics can be found in, e.g., JConsole.
(I've built and installed a newer metrics-reporter JAR, reporter-config3-3.0.2.jar, but with the same outcome.)
I figured it out myself: the patterns are different from the ones listed in the sample config file.
So, keyspace and table metrics are:
org.apache.cassandra.metrics.keyspace.+
org.apache.cassandra.metrics.Table.+
metrics-reporter-config-sample.yaml needs to be updated for the newer Cassandra versions.
The trick was to export all metrics by changing the white-list to a black-list:
predicate:
  color: "black"
  patterns:
    - ".*JMXONLY$"
and then figure out the right patterns for the white-list.
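For reference, a white-list predicate using the corrected patterns might look like this (a sketch, extending the sample config; adjust the pattern list to the metrics you actually need):

predicate:
  color: "white"
  patterns:
    - "^org.apache.cassandra.metrics.Table.+"
    - "^org.apache.cassandra.metrics.keyspace.+"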
How do I build a test that will tell me which Cassandra nodes are being written to? I want to specify the number of nodes and the replication factor, attempt an insert, and get back which nodes are affected by each write. This would tell me how evenly the data would be distributed at runtime. I have test data, so what I really need is a way to call a mock Cassandra, configured the way I would run in production, that returns which nodes are affected.
I don't see a way to do that with the Cassandra stress tool, unless I am completely missing it...
Since you are interested in knowing all of the nodes that were impacted by a query, I would recommend looking into tracing.
Here are a few approaches you could take:
Use cassandra-stress and enable tracing with nodetool settraceprobability on each of your C* nodes, set to a low value like 0.01. This will enable tracing on 1% of your queries, and you can observe the results of the trace via the system_traces.events and sessions tables (see this article for more information on how to use these tables). The trace will include information like which node was used as the coordinator, which other nodes were used as replicas for reads/writes, and how long it took to process individual steps. Note that the way your application ends up querying data may be slightly different from cassandra-stress, since which nodes are queried is influenced by your Cluster configuration. cassandra-stress uses JavaDriverClient#connect. You will want to compare your configuration with what JavaDriverClient is doing and understand the differences. You could also modify JavaDriverClient to match your application.
You may also want to write a test against your application that uses Cassandra. The java-driver has an API for enabling tracing and observing the data, which I've documented in a video here. Additionally, when you get a ResultSet back, there is a method getExecutionInfo() that provides information such as which hosts were tried, but this only includes nodes that were used as coordinators, not all the replicas; a sketch of both follows below.
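Here is a sketch of that second approach with the java-driver 3.x; the contact point, keyspace, table, and the statement itself are placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ExecutionInfo;
    import com.datastax.driver.core.QueryTrace;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class WhoGotMyWrite {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("mykeyspace")) {
                Statement stmt = new SimpleStatement(
                        "INSERT INTO mytable (id, value) VALUES (1, 'x')")
                        .enableTracing();
                ResultSet rs = session.execute(stmt);
                ExecutionInfo info = rs.getExecutionInfo();
                // Only coordinators show up here, not the replicas.
                System.out.println("coordinator(s) tried: " + info.getTriedHosts());
                // getQueryTrace() fetches the trace from system_traces; each
                // event's source is a node that did work for this write.
                QueryTrace trace = info.getQueryTrace();
                for (QueryTrace.Event e : trace.getEvents()) {
                    System.out.println(e.getSource() + " : " + e.getDescription());
                }
            }
        }
    }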