ClassNotFoundException for parquet on CentOS 7.3 - apache-spark

I have been using spark-1.5.2 built with hadoop-1.0.4, along with spark-csv_2.10-1.4.0 and commons-csv:1.1 (for reading data). I run NaiveBayes/RandomForest algorithms on CentOS 6.7 and everything works fine. After I upgraded the OS to 7.3, I get the following exception:
Caused by: java.lang.ClassNotFoundException: Failed to load class for data source: parquet.
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:67)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:167)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:137)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:304)
at org.apache.spark.mllib.classification.NaiveBayesModel$SaveLoadV2_0$.save(NaiveBayes.scala:206)
at org.apache.spark.mllib.classification.NaiveBayesModel.save(NaiveBayes.scala:169)
at com.zlabs.ml.core.algo.classification.naivebayes.NaiveBayesClassificationImpl.createModel(NaiveBayesClassificationImpl.java:111)
... 10 more
This exception happens when I save the model; training completes successfully.
Is anyone else facing this issue? Are there any OS-level package dependencies for Spark?
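For reference, the failing call is ResolvedDataSource.lookupDataSource, which resolves the "parquet" source from the classpath at save time, so one quick check is whether the assembly jar on the upgraded machine still contains the parquet data source classes. A minimal sketch, assuming the default assembly jar name and location (adjust the path to your build):

```
# sanity check only: confirm the parquet data source classes are on the classpath
jar tf "$SPARK_HOME/lib/spark-assembly-1.5.2-hadoop1.0.4.jar" | grep -i parquet
```

If nothing shows up, the problem is the classpath on the new machine rather than an OS-level package.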

Related

Azure Databricks - Library Installation Fails

I am running a job from a jar file in Azure Databricks. This jar has a dependency on azure-storage-file-share. Previously there was no issue installing this dependency via Maven in the Databricks UI; now the installation fails with this error message:
Run result unavailable: job failed with error message Library installation failed for library due to user error for maven { coordinates: "com.azure:azure-storage-file-share:12.16.2" } Error messages: Library installation attempted on the driver node of cluster 0131-144423-r1g81134 and failed. Please refer to the following error message to fix the library or contact Databricks support. Error Code: DRIVER_LIBRARY_INSTALLATION_FAILURE. Error Message: java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File file:/local_disk0/tmp/clusterWideResolutionDir/maven/ivy/jars/io.netty_netty-transport-native-kqueue-4.1.86.Final.jar does not exist
To work around this, I tried manually installing the netty library (and several others) as well. I can see in the logs that it downloaded the jar successfully:
downloading https://maven-central.storage-download.googleapis.com/maven2/io/netty/netty-transport-native-kqueue/4.1.86.Final/netty-transport-native-kqueue-4.1.86.Final-osx-x86_64.jar ... [SUCCESSFUL ] io.netty#netty-transport-native-kqueue;4.1.86.Final!netty-transport-native-kqueue.jar (202ms)
However, it still fails. This error message appears in the same log, after the one above:
23/01/31 14:50:09 WARN LibraryState: [Thread 137] Failed to install library maven;azure-storage-file-share;com.azure;12.16.2;; java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File file:/local_disk0/tmp/clusterWideResolutionDir/maven/ivy/jars/io.netty_netty-transport-native-kqueue-4.1.86.Final.jar does not exist
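The missing artifact is the macOS-only kqueue transport, which a Linux driver should not need at runtime, so one hedged workaround is to exclude it from the Maven resolution instead of installing it manually. A sketch using the Libraries API (the workspace URL and token are placeholders, and the exclusion itself is an assumption; the cluster id comes from the error above):

```
# sketch only: install the library while excluding the macOS-only netty transport
curl -X POST "https://<workspace-url>/api/2.0/libraries/install" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "cluster_id": "0131-144423-r1g81134",
        "libraries": [{
          "maven": {
            "coordinates": "com.azure:azure-storage-file-share:12.16.2",
            "exclusions": ["io.netty:netty-transport-native-kqueue"]
          }
        }]
      }'
```

The same exclusions field is available in the Maven library dialog of the cluster UI.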

Unable to save model in Apache Spark -- Py4JJavaError

We're getting an error while trying to save a model with model.save('DT').
Py4JJavaError: An error occurred while calling o822.save.
: org.apache.spark.SparkException: Job aborted.
Complete Error Stack --> http://dpaste.com/16Y07B9
Did we miss anything here? It creates the folder but doesn't write anything into it.
OS: Windows 10
TIA
It turns out I was using Spark 3.0.0-preview and that was the problem. I switched to 2.4.5 and that resolved it.
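For anyone comparing installs, it may help to confirm which Spark build the driver actually picks up before and after switching. A minimal check (paths depend on your setup):

```
# confirm which Spark build is on the PATH and what version it reports
echo "$SPARK_HOME"
spark-submit --version
```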

CDIIntegrationService errors found when upgrading WebLogic from 12.1.2 to 12.1.3

I upgraded the WebLogic version on a Linux server from 12.1.2 to 12.1.3 by changing the wl_home path in setDomainEnv.sh and restarting. On restart it gives the errors below.
I'd appreciate it if anyone could give me an idea about what is going on.
java.lang.IllegalAccessError: tried to access method com.bea.logging.LogBufferHandler.bufferLogObject(Ljava/lang/Object;)V from class weblogic.logging.log4j.WLLog4jMemoryBufferAppender
java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.diagnostics.lifecycle.LoggingServerService
java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.diagnostics.lifecycle.DiagnosticFoundationService errors were found
java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.diagnostics.lifecycle.DiagnosticFoundationService
java.lang.IllegalArgumentException: While attempting to resolve the dependencies of com.oracle.injection.integration.CDIIntegrationService errors were found
java.lang.IllegalStateException: Unable to perform operation: resolve on com.oracle.injection.integration.CDIIntegrationService
Use the wllog4j.jar that is part of the WLS 12.1.3 build, i.e. the one in the WLS_HOME/server/lib directory.
If your domain-home/lib folder holds the older wllog4j.jar from 12.1.2, you will face this issue.
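Concretely, the fix would look something like this (a sketch, assuming the standard domain layout; set DOMAIN_HOME and WL_HOME for your install):

```
# remove the stale 12.1.2 copy from the domain's lib directory
rm "$DOMAIN_HOME/lib/wllog4j.jar"
# on restart the server picks up the 12.1.3 copy shipped with the product
ls "$WL_HOME/server/lib/wllog4j.jar"
```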

Spark JobServer NullPointerException

I'm trying to start a Spark jobserver; here are the steps I'm following:
1. Configure local.sh based on the template.
2. Run ./bin/server_deploy.sh; it finishes without any error.
3. Configure local.conf.
4. Run ./bin/server_start.sh on the deploy server.
But when I do the last step I get the following error:
Error: Exception thrown by the agent : java.lang.NullPointerException
Note: I'm using Spark 1.4.1 and jobserver version 0.5.2 (https://github.com/spark-jobserver/spark-jobserver/tree/v0.5.2).
Any idea how I can fix this (or at least debug it)?
Thanks
The error log does not provide much information.
I encountered the same error. In my case, I had another instance of the jobserver running (and somehow ./bin/server_stop.sh did not catch it). It worked after I manually killed the other process.
Hint: see the question "Error: Exception thrown by the agent : java.lang.NullPointerException when starting Java application".
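A minimal way to check for a leftover instance before starting again (a sketch; the process name depends on your deployment):

```
# look for a jobserver JVM that server_stop.sh may have missed
ps aux | grep -i "[s]park-job-server"
# if one shows up, kill it by PID, then start again
kill <pid>
./bin/server_start.sh
```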

Cassandra 2.1.8: Node refuses to start with NPE in removeUnfinishedCompactionLeftovers

I am trying to upgrade a Cassandra 2.1.0 cluster to 2.1.8 (the latest release).
When I start the first node with the 2.1.8 runtime, I get an error and the node refuses to start.
This is the error's stack trace:
org.apache.cassandra.io.FSReadError: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) [apache-cassandra-2.1.8.jar:2.1.8]
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634) ~[apache-cassandra-2.1.8.jar:2.1.8]
... 3 common frames omitted
FSReadError in Failed to remove unfinished compaction leftovers (file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db). See log for details.
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634)
... 3 more
Exception encountered during startup: java.lang.NullPointerException
The cluster has 7 nodes and runs on AWS Linux EC2 instances.
The node I am trying to upgrade was stopped after a nodetool drain.
I then tried to roll back to the 2.1.0 runtime, but I now get a similar error.
I also tried stopping and starting another node, and it restarted without any problem.
I tried touching the missing file (since it is supposed to be removed, I thought it might not need any specific content). Two other files gave the same error, so I touched those as well. But then the node fails further along, while trying to read these files.
Does anyone have an idea what I should do?
Thank you for any help.
It might be worth opening a Jira for that issue, so if nothing else, they can catch the NPE and provide a better error message.
It looks like it's trying to open:
file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db
It's possible that it's trying to read that file because it finds the associated data file (/home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Data.db). Does that data file exist? I'd be tempted to move it out of the way and see if it starts properly.
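Concretely, that suggestion would look something like this (a sketch; the paths come from the stack trace above, and the quarantine directory is just an arbitrary holding place):

```
# move the whole orphaned SSTable generation aside, not just Statistics.db,
# so Cassandra no longer sees a Data.db with missing components
mkdir -p /home/nudgeca2/quarantine
mv /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-* \
   /home/nudgeca2/quarantine/
```

If the node then starts cleanly, the moved files can be dealt with afterwards, once the data is verified elsewhere in the cluster.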
