Cassandra 2.0.4: Unable to initialize MemoryMeter

I've upgraded Cassandra from 1.2.13 to 2.0.4 on a cluster of 5 nodes.
When I run nodetool -h localhost ring, I see this error message at the end:
ERROR 10:33:28,324 Unable to initialize MemoryMeter (jamm not specified as javaagent). This means Cassandra will be unable to measure object sizes accurately and may consequently OOM.
According to this:
https://issues.apache.org/jira/browse/CASSANDRA-6404
it should be fixed.
I'm running java-1.7.0-oracle-1.7.0.45-1jpp.2.el6_4.x86_64.
This is the beginning of the process's command-line options:
/usr/lib/jvm/java-1.7.0-oracle-1.7.0.45.x86_64/jre/bin/java -ea -javaagent:/usr/share/cassandra//lib/jamm-0.2.5.jar
Is there anyone who could point me in a direction where to look for a solution?
Are these errors serious or merely cosmetic?
//john

If this error is only emitted by the tools, you can ignore it. If you also see it in your output.log or system.log, then it can be problematic. Your JVM version looks fine. If the command line you pasted here is from the Cassandra process itself, then you are good; the tools use a different script to initialize their environment. Check the cassandra/bin folder and inspect the tools' scripts to see whether they include this change:
https://issues.apache.org/jira/secure/attachment/12615604/0001-Set-javaagent-when-running-tools-in-bin.patch
Chances are your upgrade process simply didn't update them.
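For example, a quick way to check which of the shipped scripts already pass jamm as a -javaagent (the path below assumes a package install like yours; adjust it to your layout):
# List the scripts in Cassandra's bin directory that reference the jamm javaagent
grep -l "javaagent" /usr/share/cassandra/bin/* 2>/dev/null
Any tool script missing from that list is a likely source of the warning.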

Related

Updating a galera-cluster to 10.3.15 with MDEV-17458

I just updated a MariaDB/Galera cluster to DB version 10.3.15. It won't work correctly without at least 2 nodes up, but trying to start any node past the first runs into strange error messages like:
0 [Warning] WSREP: SST position can't be set in past. Requested: 0, Current: 14422308.
0 [Warning] WSREP: Can't continue.
This bug may be related:
https://jira.mariadb.org/browse/MDEV-17458?attachmentViewMode=list
However, I notice one peculiarity: the requested state is 0, quite possibly because it's lost somewhere along the way, or because I'm experiencing an entirely different issue.
I also know what it should be: the value that it thinks is 'current'.
In other words, reality is the exact opposite of what this node thinks is true: the 'current' should be 0, the 'requested' should be 14422308.
In a related issue:
https://jira.mariadb.org/browse/MDEV-19193
someone comments off-hand about deleting some files in order to start from a pristine state, but isn't exactly clear about what to do where.
I do not mind starting from the data on one node, ignoring everything on the other nodes and copying everything over.
I tried deleting the following file(s) from the offending nodes (I believe the data directory they're mentioning is /var/lib/mysql/ on most Linux systems):
galera.cache
ib_logfile0
ib_logfile1
This has no effect.
Someone over at the question "Unable to complete SST transfer due to "WSREP: SST position can't be set in past." error" suggests changing the SST number on the node that's still OK. But that won't work: I can only start that node if I use the 'galera_new_cluster' script, which resets its SST number to '-1' no matter what it was. If I start it normally, I get an error like this:
[ERROR] WSREP: wsrep::connect(gcomm://<IP1>,<IP2>,<IP3>,...) failed: 7
In other words, there aren't enough other nodes online to join the cluster. So in order to change the SST number on the primary node, another node needs to be online, but in order to start up the other node, I need to change the SST number on the primary? That's a catch-22; it won't work.
It's nice that they fixed the bug, but how do I fix my now broken cluster?
One more question I've asked myself is this: does this 'SST number' of 14422308 originate from the node that's trying to re-join the cluster, or is it retrieved from the cluster? Apparently the latter, because even completely reinstalling the secondary node from scratch and trying to re-join the cluster with it does not solve the problem. The exact same error message stays.
Somehow, the cluster appears to have gotten confused as to its own state. The JOINER nodes in each synchronization step think they have a more advanced state than the DONOR nodes.
The solution to this problem is to trick the cluster; to force it to recognize some node as 'more advanced'.
Suppose we can identify one node that has complete cluster data. Denote this the '1st node'. Pick one node to be the 2nd, one to be the 3rd, etc. (these choices can be made at random).
Then stop MySQL on all nodes. Edit the cluster configuration file and change the value of 'wsrep_cluster_address' on each node (an example edit is sketched after the table). It should be the following:
+------+---------------------------+
| Node | wsrep_cluster_address |
+------+---------------------------+
| 1 | gcomm:// |
| 2 | gcomm://<IP1>,<IP2> |
| 3 | gcomm://<IP1>,<IP2>,<IP3> |
+------+---------------------------+
(The pattern continues like this for the fourth and any further nodes in the cluster).
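A hedged example of making that change on node #2 (the config file location varies by distribution, so locate it first; <IP1> and <IP2> are the placeholders from the table above):
# Find which file currently sets wsrep_cluster_address
grep -Rn "wsrep_cluster_address" /etc/mysql /etc/my.cnf.d 2>/dev/null
# ...then, in that file on node #2, set the value to:
#   wsrep_cluster_address = "gcomm://<IP1>,<IP2>"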
Now remove all cached data from the nodes other than the first (see the example after the file list). These are the files:
ib_logfile*
grastate.dat
gvwstate.dat
galera.cache
located in the data directory of the MySQL installation (for example, /var/lib/mysql/ on Debian systems).
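For example, on a Debian-style data directory the cleanup might look like this (run it on every node except node #1):
# Remove the cached Galera/InnoDB state so the node does a full state transfer from its donor
sudo rm -f /var/lib/mysql/ib_logfile* \
           /var/lib/mysql/grastate.dat \
           /var/lib/mysql/gvwstate.dat \
           /var/lib/mysql/galera.cache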
Then edit the grastate.dat file on node #1. In our example, the most advanced state the cluster has seen so far is 14422308, so set the seqno to 14422309 (that is, old state + 1). Also set safe_to_bootstrap to 0 on all nodes (so we don't accidentally try to bootstrap, lose our seqno, and run into the same bug again).
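A sketch of that edit on node #1, using sed against the standard grastate.dat keys (the data directory path is assumed; back the file up first, and the seqno value is the one from this example):
# Keep a copy of the original state file
sudo cp /var/lib/mysql/grastate.dat /var/lib/mysql/grastate.dat.bak
# Bump the seqno to old state + 1 and disable accidental bootstrapping
sudo sed -i 's/^seqno:.*/seqno:   14422309/' /var/lib/mysql/grastate.dat
sudo sed -i 's/^safe_to_bootstrap:.*/safe_to_bootstrap: 0/' /var/lib/mysql/grastate.dat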
Now start MySQL on node #1 (for example, via systemd: systemctl start mysql).
Once it's running, do the same on node #2. Wait for all the data to transfer (this may take a while depending on the inter-node connection speed and the size of the database in question), then repeat for node 3, and any further nodes.
Afterwards, restore the value for wsrep_cluster_address in every configuration to what it should be (which is equal to the value for the last node).
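Once every node has rejoined, you can confirm that the cluster sees all members from any node (wsrep_cluster_size is a standard Galera status variable; add your usual client credentials):
# Should report the total number of nodes in the cluster
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"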

Hadoop Yarn Stuck at Running Job Ubuntu

I was trying to run Hadoop in single-node pseudo-distributed mode on an EC2 Ubuntu machine for a simple task. However, my program is stuck at the running-job stage. I attach my Linux screen and ResourceManager page here. Any idea is appreciated. Thanks.
Add 1: The other thing I found is that the NodeManager disappears when I type jps (it was there the first time I typed jps but disappeared later).
Add 2: I checked the NodeManager log and noticed that it was shut down because the minimum allocation was not satisfied, even though I had changed the scheduler minimum MB to 128 and vcores to 1 in yarn-site.xml.
I finally solved my own problem above.
As added to the problem statement above, I later found that my NodeManager didn't start properly (the first time I typed jps, the NodeManager was there, as shown in the attached picture; however, it disappeared seconds after that. I found this by typing jps many times).
I checked the NodeManager log and found the error: "doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager"
I searched on the internet and found that the solution is to add the following properties to the yarn-site.xml file:
yarn.nodemanager.resource.memory-mb = 1024
yarn.nodemanager.resource.cpu-vcores = 1
Then restart HDFS and YARN (start-dfs.sh and start-yarn.sh). If you go to hostname:8088, you should see that the total memory and total vcores are now larger than 0, and you can run an application.
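For reference, a quick way to confirm that the NodeManager stayed up and the new limits took effect (jps, the default 8088 port, and the ResourceManager REST endpoint below are stock Hadoop; adjust the host if yours differs):
# Is the NodeManager process still running a minute or so after startup?
jps | grep NodeManager
# Does the ResourceManager now report non-zero total memory and vcores?
curl -s http://localhost:8088/ws/v1/cluster/metrics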
PS: My application was initially stuck at map 0% and reduce 0%. I then simply changed 1024 to 4096 and 1 to 2 above, after which I could successfully run a MapReduce program on my EC2 instance (single-node pseudo-distributed).

ERROR 1777 (HY000): Partition memsqldb:0 has no master instance

I am using the Community Edition of MemSQL. I got this error while I was running a query today, so I just restarted my cluster and that solved the error:
memsql-ops cluster-restart
But what happened, and what should I do in the future to avoid this error?
NOTE
I do not want to buy the Enterprise Edition.
Question
Is this a problem of availability?
I got this error when experimenting with performance.
The VM had 24 CPUs and 25 nodes: 1 master aggregator and 24 leaf nodes.
I reduced the VM to 4 CPUs and restarted the cluster.
Not all of the leaves recovered: all except 4 came back in under 5 minutes.
20 minutes later, those 4 leaf nodes were still not connected.
From MySQL/MemSQL prompt:
use db;
show partitions;
I noticed that some of the partitions (the ordinals run from 0-71 in my case) had NULL instead of the Host, Port, and Role defined.
In the MemSQL Ops UI (http://server:9000 > Settings > Config > Manual Cluster Control) I checked "ENABLE MANUAL CONTROL" while I tried to run various commands, with no real benefit.
Then, 15 minutes later, I unchecked the box; MemSQL Ops tried attaching all the leaf nodes again and was finally successful.
Perhaps a cluster restart would have done the same thing.
This happened because a leaf in your cluster has failed a health check heartbeat for some reason (loss of network connectivity, hardware failure, OS issue, machine overloaded, out of memory, etc.) and its partitions are no longer accessible to query. MemSQL Community Edition only supports redundancy 1 so there are no other copies of the data on the failed leaf node in your cluster (thus the error about missing a partition of data - MemSQL can't complete a query that needs to read data on any partitions on the problem leaf).
Given that a restart repaired things, the most likely answer is that the Linux "out of memory" killer got you: see the MemSQL Linux OOM killer docs.
You can also check the tracelog on the leaf that ran into issues to see if there is any clue there about what happened (It's usually at /var/lib/memsql/leaf_3306/tracelogs/memsql.log)
-Adam
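As a quick check for the OOM scenario described above (the default leaf tracelog path from the answer is assumed):
# Did the kernel OOM-kill anything around the time the leaf failed?
dmesg | grep -i "out of memory"
# Any clues in the leaf's tracelog?
tail -n 200 /var/lib/memsql/leaf_3306/tracelogs/memsql.log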
I too have faced this error; in my case it was because some of the slave partitions had no corresponding masters. My error message looked like:
ERROR 1772 (HY000) at line 1: Leaf Error (10.0.0.112:3306): Partition database `<db_name>_0` can't be promoted to master because it is provisioning replication
My memsql> SHOW PARTITIONS; command showed several partitions whose role was either Slave or NULL.
So the approach I followed was to drop each of those partitions:
DROP PARTITION <db_name>:4 ON "10.0.0.193":3306;
..
DROP PARTITION <db_name>:46 ON "10.0.0.193":3306;
And then created a new partition for each of the dropped ones:
CREATE PARTITION <db_name>:4 ON "10.0.0.193":3306;
..
CREATE PARTITION <db_name>:46 ON "10.0.0.193":3306;
Running memsql> SHOW PARTITIONS; again after that showed the recreated partitions.
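If many partitions are affected, a small shell loop can generate the same statements to paste into the memsql> prompt (the ordinals 4 and 46 and the host are just the ones from the example above; substitute your own list):
# Print DROP statements for the affected partitions, then the matching CREATEs
for p in 4 46; do
  echo "DROP PARTITION <db_name>:$p ON \"10.0.0.193\":3306;"
done
for p in 4 46; do
  echo "CREATE PARTITION <db_name>:$p ON \"10.0.0.193\":3306;"
done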
You can refer to the MemSQL documentation regarding partitions here if the above steps don't seem to solve your problem.
I was hitting the same problem. Running the following command on the master node solved it:
REBALANCE PARTITIONS ON db_name
Optionally you can force it using FORCE:
REBALANCE PARTITIONS ON db_name FORCE
And to see the list of operations the rebalance is going to execute, use the above command with EXPLAIN:
EXPLAIN REBALANCE PARTITIONS ON db_name [FORCE]

Index initializer warning during bootstrap

I'm trying to simultaneously add 4 nodes to my current 2-node DC. I have vnodes turned off, as per the DataStax suggestion. Right after the major index build on each node, the following warning is printed several times in the logs:
WARN [SolrSecondaryIndex ks.cf index initializer.] 2014-06-20
09:39:59,904 CassandraUtil.java (line 108) Error Operation timed out -
received only 3 responses. on attempt 1 out of 4 with CL QUORUM...
I understand what it means. But why is Cassandra expecting the nodes to fulfill the CL when these nodes are still bootstrapping? More importantly, how does the warning affect the bootstrap? I noticed that the nodes are not doing any index build or streaming anymore; but they also remained in "Active - Joining" state. Is there any chance that they will finish? What should I do?
I'm using DSE 4.0.3. All existing and new nodes in the DC are Search nodes. I pre-computed the tokens using the Python program for Murmur3Partitioner.
EDIT:
Although nodetool compactionstats does not show any on-going index build in the nodes, for some reason, I still see a lot of these lines in the logs:
INFO [IndexPool backpressure thread-0] 2014-06-20 12:30:31,346 IndexPool.java (line 472) Throttling at 26 index requests per second with target total queue size at 40
INFO [IndexPool backpressure thread-0] 2014-06-20 12:30:34,169 IndexPool.java (line 428) Back pressure is active with total index queue size 18586 and average processing time 2770
EDIT:
Interestingly, I found the following lines in each node after digging through the log files:
INFO [main] 2014-06-20 09:39:48,588 StorageService.java (line 1036) Bootstrap completed! for the tokens [node token]
INFO [SolrSecondaryIndex ks.cf index initializer.] 2014-06-20 11:32:07,833 AbstractSolrSecondaryIndex.java (line 411) Reindexing 1417116631 commit log updates for core ks.cf
Based from these lines, I feel a lot safer that the bootstrap actually completed and that the nodes are simply re-indexing their data. I don't know, though, why the re-indexing process is not being shown in nodetool compactionstats.
It appears the bootstrap completed, and the DSE Search system is running normally.
why the re-indexing process is not being shown in nodetool compactionstats
DSE Search is not generally exposed via the Cassandra command-line tools. The log output should show the indexing as having completed; were you able to verify that?
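One way to verify is to grep the log for the messages quoted above, assuming the standard packaged log location (adjust the path if your install differs):
# Look for the bootstrap and reindex lines for the core in question
grep -En "Bootstrap completed|Reindexing" /var/log/cassandra/system.log | tail -n 20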

In Cassandra 1.2 - CQL 3 is it possible to abort a secondary index build?

I'd been using a 6 GB dataset, with each source record being ~1 KB in length, when I accidentally added an index on a column that I am pretty sure has 100% cardinality.
I tried dropping the index from cqlsh, but by that point the two-node cluster had gone into a runaway death spiral with the load average surpassing 20 on each node, and cqlsh hung on the drop command for 30 minutes. Since this was just a test setup, I shut down and destroyed the cluster and restarted.
This is a fairly disconcerting problem, as it makes me fear a scenario where a junior developer on a production cluster sets an index on a similarly high-cardinality column. I scanned through the documentation and looked at the options in nodetool, but there didn't seem to be anything along the lines of "abort job" or "abort building index".
Test environment:
2x m1.xlarge EC2 instances with 2 RAID 0 ephemeral disks
Dataset was 6GB, 1KB per record.
My question in summary: Is it possible to abort the process of building a secondary index, and/or to stop/postpone running builds (indexing, compaction) until a later date?
nodetool -h node_address stop index_build
See: http://www.datastax.com/docs/1.2/references/nodetool#nodetool-stop
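For completeness, a hedged sketch of the relevant commands (the stop types are per the DataStax 1.2 nodetool reference linked above; substitute your own node address and index name):
# Abort an in-progress secondary index build
nodetool -h node_address stop index_build
# Compactions can be stopped the same way
nodetool -h node_address stop compaction
# Once the build is stopped, the index itself can be dropped from cqlsh:
#   DROP INDEX <index_name>;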
