Getting Invalid Yaml exception after we upgraded the Java version - Cassandra

We have updated the Java minor version and the Cassandra version as given below.
Older Java version: "1.8.0_144"
Newer Java version: "1.8.0_261"
Older Cassandra version: 2.1.16
Newer Cassandra version: 3.11.2
Now we are getting the exception below:
INFO [main] 2020-08-19 10:55:09,364 YamlConfigurationLoader.java:89 - Configuration
location: file:/etc/cassandra/default.conf/cassandra.yaml
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: Invalid yaml: file:/etc/cassandra/default.conf/cassandra.yaml
Error: null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=java.lang.reflect.InvocationTargetException; in 'reader', line 1, column 1:
cluster_name: VV Cluster
We did not change cassandra.yaml; the same file works on other nodes running the older versions.

Sometimes the issue is not with the YAML itself. As suggested in this DataStax post, you can run a utility like cassandra-stress, which bypasses the parser. This may help in identifying the real reason for the failure.
Sometimes the YAML exception masks the underlying issue, so the real problem surfaces as an invalid-yaml error.
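Before reaching for other tools, a quick standalone parse can confirm whether the file is really invalid YAML. A minimal sketch using Python with PyYAML (a stand-in for the SnakeYAML parser Cassandra uses; the sample strings are illustrative):

```python
# Sketch: distinguish a genuine YAML syntax error from a value that the YAML
# parser accepts but Cassandra's Config object rejects. Requires PyYAML.
import yaml

def yaml_syntax_error(text):
    """Return the parse error for a YAML document, or None if it parses cleanly."""
    try:
        yaml.safe_load(text)
        return None
    except yaml.YAMLError as e:
        return e

# A file can be valid YAML and still fail Config construction: a
# "Can't construct a java object ... InvocationTargetException" error
# usually means a *value* is wrong, not the YAML syntax.
print(yaml_syntax_error("cluster_name: VV Cluster"))  # None: valid YAML
print(yaml_syntax_error("cluster_name: [unclosed"))   # a real syntax error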

Related

ClassNotFoundException for parquet on CentOS 7.3

I have been using spark-1.5.2 built with hadoop-1.0.4, along with spark-csv_2.10-1.4.0 and commons-csv:1.1 (for reading data). I run the naivebayes/randomforest algorithms on CentOS 6.7 and everything works fine. When I upgraded the OS to 7.3, I started getting the following exception:
Caused by: java.lang.ClassNotFoundException: Failed to load class for data source: parquet.
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:67)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:167)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:137)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:304)
at org.apache.spark.mllib.classification.NaiveBayesModel$SaveLoadV2_0$.save(NaiveBayes.scala:206)
at org.apache.spark.mllib.classification.NaiveBayesModel.save(NaiveBayes.scala:169)
at com.zlabs.ml.core.algo.classification.naivebayes.NaiveBayesClassificationImpl.createModel(NaiveBayesClassificationImpl.java:111)
... 10 more
This exception happens when I save the model; training completes successfully.
Is anyone else facing this issue? Are there any OS-level package dependencies for Spark?

Cassandra configuration error in cassandra 3.10

I am getting the following error when trying to run Cassandra for the first time. Any suggestions?
CassandraDaemon.java:752 - Exception encountered during startup: Invalid yaml: file:/D:/apache-cassandra-3.10/conf/cassandra.yaml
Error: null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=java.lang.reflect.InvocationTargetException; in 'reader', line 10, column 1:
cluster_name: 'Test Cluster'
^
Check the cassandra.yaml file. Even if it seems to have the correct format, there must be a space after the colon for each property.
In my case I had modified the seeds like below:
seeds:"IP1,IP2"
which caused the same error.
Misleadingly, the error points at cluster_name.
I put a space after the colon:
seeds: "IP1,IP2"
and started the Cassandra service. The error disappeared and Cassandra started successfully.
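The missing-space pitfall is easy to reproduce outside Cassandra. A small sketch with Python and PyYAML (Cassandra itself uses SnakeYAML, but the YAML parsing rule is the same):

```python
# Sketch: without a space after the colon, YAML treats the whole line as one
# plain scalar instead of a key/value pair. Requires PyYAML.
import yaml

missing_space = yaml.safe_load('seeds:"IP1,IP2"')
with_space = yaml.safe_load('seeds: "IP1,IP2"')

print(missing_space)  # the single string 'seeds:"IP1,IP2"' - not a mapping
print(with_space)     # {'seeds': 'IP1,IP2'}
```

This is why the node fails to start: the property never becomes a key/value pair at all, and Cassandra's Config construction blows up on the unexpected structure.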

Cassandra upgrade from 2.2.1 to 3.0.0 fails with NullPointerException

I tried upgrading Cassandra from 2.2.1 to 3.0.0 but Cassandra doesn't start:
ERROR [main] 2015-11-30 15:44:50,164 CassandraDaemon.java:702 - Exception encountered during startup
java.lang.NullPointerException: null
at org.apache.cassandra.io.util.FileUtils.delete(FileUtils.java:374) ~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.db.SystemKeyspace.migrateDataDirs(SystemKeyspace.java:1341) ~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:180) [apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:561) [apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) [apache-cassandra-3.0.0.jar:3.0.0]
Is anyone else facing this problem?
I also raised an issue here:
https://issues.apache.org/jira/browse/CASSANDRA-10788
Here is TRACE level logging:
TRACE [MemtablePostFlush:1] 2015-12-01 16:47:52,675 ColumnFamilyStore.java:868 - forceFlush requested but everything is clean in schema_columns
TRACE [main] 2015-12-01 16:47:52,675 ColumnFamilyStore.java:1563 - Snapshot for Keyspace(name='system') keyspace data file /data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/la-46-big-Data.db created in /data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/snapshots/1448984872341-upgrade-2.2.1-3.0.0
TRACE [main] 2015-12-01 16:47:52,676 ColumnFamilyStore.java:1563 - Snapshot for Keyspace(name='system') keyspace data file /data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/la-45-big-Data.db created in /data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/snapshots/1448984872341-upgrade-2.2.1-3.0.0
TRACE [main] 2015-12-01 16:47:52,676 ColumnFamilyStore.java:1563 - Snapshot for Keyspace(name='system') keyspace data file /data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/la-47-big-Data.db created in /data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/snapshots/1448984872341-upgrade-2.2.1-3.0.0
TRACE [main] 2015-12-01 16:47:52,676 SystemKeyspace.java:1327 - Checking directory /data for old files
ERROR [main] 2015-12-01 16:47:52,751 CassandraDaemon.java:702 - Exception encountered during startup
java.lang.NullPointerException: null
at org.apache.cassandra.io.util.FileUtils.delete(FileUtils.java:374) ~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.db.SystemKeyspace.migrateDataDirs(SystemKeyspace.java:1341) ~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:180) [apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:561) [apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) [apache-cassandra-3.0.0.jar:3.0.0]
It looks like the code expects legacy files under the /data directory, but it can't find any and calls FileUtils.delete(null).
One thing that may impact your migration: upgrading to 3.0 from Cassandra 2.2.1 is not recommended by DataStax.
Cassandra 3.0.x restrictions
Upgrade from Cassandra 2.1 versions greater than or equal to 2.1.9, or from Cassandra 2.2 versions greater than or equal to 2.2.2.
*Emphasis added
The null pointer could be happening due to non-existent files or a duplicated delete call.
I would try upgrading to at least 2.2.2 before the 3.0 migration.
I was able to fix the problem by using this patch:
https://github.com/stef1927/cassandra/commit/1c464adf097d323320ce11db6daf05e1a31c62b6
More details:
https://issues.apache.org/jira/browse/CASSANDRA-10788
And thanks, I'll see if upgrading from 2.2.1 to 2.2.2 first helps.

Cassandra 2.1.8: Node refuses to start with NPE in removeUnfinishedCompactionLeftovers

I am trying to upgrade a Cassandra 2.1.0 cluster to 2.1.8 (the latest release).
When I start the first node with the 2.1.8 runtime, I get an error and the node refuses to start.
This is the error's stack trace:
org.apache.cassandra.io.FSReadError: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) [apache-cassandra-2.1.8.jar:2.1.8]
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634) ~[apache-cassandra-2.1.8.jar:2.1.8]
... 3 common frames omitted
FSReadError in Failed to remove unfinished compaction leftovers (file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db). See log for details.
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634)
... 3 more
Exception encountered during startup: java.lang.NullPointerException
The cluster has 7 nodes and runs on AWS Linux EC2 instances.
The node I am trying to upgrade was stopped after a nodetool drain.
I then tried to go back to the 2.1.0 runtime, but I now get a similar error.
I also tried stopping and starting another node, and everything was fine; it restarted without any problem.
I tried to touch the missing file (since it is supposed to be removed anyway, I thought it might not need specific content). Two other files produced the same error, so I touched them as well. The node then fails further on while trying to read these files.
Does anyone have any idea what I should do?
Thank you for any help.
It might be worth opening a JIRA for this issue, so that, if nothing else, they can catch the NPE and provide a better error message.
It looks like it's trying to open:
file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db
It's possible that it's trying to read that file because it finds the associated data file (/home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Data.db). Does that data file exist? I'd be tempted to move it out of the way and see if the node starts properly.
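To check for such mismatched components across a data directory, here is a small stdlib-only Python sketch. It assumes SSTable components share a common filename prefix (e.g. main-segment-ka-15432-Data.db / -Statistics.db, as in the stack trace above); treat that naming assumption as illustrative:

```python
import glob
import os

def orphaned_statistics(data_dir):
    """Find *-Statistics.db SSTable components whose matching *-Data.db is missing."""
    orphans = []
    pattern = os.path.join(data_dir, "**", "*-Statistics.db")
    for stats in glob.glob(pattern, recursive=True):
        # Components of one SSTable share a prefix; swap the component suffix.
        data = stats[: -len("Statistics.db")] + "Data.db"
        if not os.path.exists(data):
            orphans.append(stats)
    return orphans
```

Any path this returns is a Statistics component without a Data file next to it; conversely, if the Data file does exist, moving the pair out of the directory (not just touching empty files) is the safer experiment.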

DataStax Agent error

While adding existing cluster in OpsCenter I receive an error:
ERROR: Agent for XXX.XXX.XXX.XXX was unable to complete operation (http://XXX.XXX.XXX.XXX:61621/snapshots/pit/properties?): java.lang.IllegalArgumentException: No implementation of method: :make-reader of protocol: #'clojure.java.io/IOFactory found for class: nil
On agent there is an error:
java.lang.IllegalArgumentException: No implementation of method: :make-reader of protocol: #'clojure.java.io/IOFactory found for class: nil
at clojure.core$_cache_protocol_fn.invoke(core_deftype.clj:541)
at clojure.java.io$fn__8551$G__8546__8558.invoke(io.clj:73)
at clojure.java.io$reader.doInvoke(io.clj:106)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$slurp.doInvoke(core.clj:6278)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.backups.pit$read_properties.invoke(pit.clj:68)
at opsagent.backups.pit$enabled_QMARK_.invoke(pit.clj:106)
at clojure.core$eval37.invoke(NO_SOURCE_FILE:107)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6609)
at clojure.lang.Compiler.eval(Compiler.java:6582)
at clojure.core$eval.invoke(core.clj:2852)
at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:102)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at opsagent.conf$handle_new_conf.invoke(conf.clj:198)
at opsagent.messaging$message_callback$fn__12316.invoke(messaging.clj:52)
at opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown Source)
at org.jgroups.client.StompConnection.notifyListeners(StompConnection.java:324)
at org.jgroups.client.StompConnection.run(StompConnection.java:274)
at java.lang.Thread.run(Thread.java:745)
Cluster creation failed. I also get this error during startup. I tried reinstalling the agent, but it didn't help.
DataStax Agent version: 5.1.0
OpsCenter version 5.1.0
root@node1:~# java -version
java version "1.7.0_75"
OpenJDK Runtime Environment (IcedTea 2.5.4) (7u75-2.5.4-1~deb7u1)
OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)
root@node1:~#
Content of address.yaml
stomp_interface: "YYY.YYY.YYY.YYY"
Content of opscenterd.conf
[webserver]
port = 8888
interface = 0.0.0.0
use_ssl = false
[logging]
level = INFO
<cluster name>.conf is absent because the cluster has not been added.
The problem is that the agent cannot find your DSE installation on that node. When it can't find DSE, it can't get the archiving properties file to update, and it errors out.
This error message is unfortunately terribly unhelpful. I've created a ticket to fix it (the ticket is private, but you can use this ticket number when discussing the issue with DataStax: OPSC-4826).
As a workaround, try setting cassandra_install_location in the address.yaml file on that node. After adjusting address.yaml, bounce the agent and retry the operation.
You can find a document listing this and more address.yaml config items here: http://www.datastax.com/documentation/opscenter/5.1/opsc/configure/agentAddressConfiguration.html
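For reference, a sketch of what the adjusted address.yaml might look like (the install path below is a hypothetical example; point it at your node's actual DSE location):

```yaml
# address.yaml - stomp_interface as before, plus the workaround setting
stomp_interface: "YYY.YYY.YYY.YYY"
# hypothetical example path; use the real DSE install directory on this node
cassandra_install_location: /usr/share/dse
```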
I think the issue is with your Java installation. I believe you'll need Oracle Java, not OpenJDK.
This worked for me:
ubuntu:~$ sudo add-apt-repository ppa:webupd8team/java
ubuntu:~$ sudo apt-get update && sudo apt-get install oracle-java7-installer oracle-java7-set-default
