We are running a Spark job which connects to Oracle and fetches some data. Attempt 0 or 1 of the JdbcRDD task always fails with the error below; the task then completes on a subsequent attempt. As suggested on a few portals we even tried the -Djava.security.egd=file:///dev/urandom Java option, but it didn't solve the problem. Can someone please help us fix this issue?
java.sql.SQLRecoverableException: IO Error: Connection reset by peer, Authentication lapse 59937 ms.
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:794)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)
The issue was with java.security.egd only. Setting it on the command line, i.e. -Djava.security.egd=file:///dev/urandom, was not working, so I set it through System.setProperty within the job. After that the job no longer throws SQLRecoverableException.
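For reference, a minimal sketch (Scala) of that approach: set the property inside the job before the Oracle JDBC driver opens its first connection. The connection URL, credentials, query and bounds below are placeholders, and sc is an existing SparkContext.
import java.sql.DriverManager
import org.apache.spark.rdd.JdbcRDD

// Force the non-blocking entropy source before the driver authenticates
System.setProperty("java.security.egd", "file:///dev/urandom")

val rdd = new JdbcRDD(
  sc,
  () => DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password"),
  "SELECT id, name FROM some_table WHERE id >= ? AND id <= ?",
  1L, 1000L, 4,          // lower bound, upper bound, number of partitions
  rs => (rs.getLong(1), rs.getString(2)))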
This exception has nothing to do with Apache Spark. "SQLRecoverableException: IO Error:" is simply the Oracle JDBC driver reporting that its connection to the DBMS was closed out from under it while in use. The real problem is at the DBMS, such as the session dying abruptly. Please check the DBMS error log and share it with the question.
You can find a similar problem here:
https://access.redhat.com/solutions/28436
The fastest way is to export the Spark environment variable SPARK_SUBMIT_OPTS before running your job, like this:
export SPARK_SUBMIT_OPTS=-Djava.security.egd=file:dev/urandom
I'm using Docker, so for me the full command is:
docker exec -it spark-master \
  bash -c "export SPARK_SUBMIT_OPTS=-Djava.security.egd=file:dev/urandom && \
  /spark/bin/spark-submit --verbose --master spark://172.16.9.213:7077 /scala/sparkjob/target/scala-2.11/sparkjob-assembly-0.1.jar"
In short: export the variable, then submit the job.
I am getting the error below. How can I run these Spark jobs (written in Scala)?
Command:
bin/run-example /home/datadotz/streaming/wc_str.scala localhost 9999
Error:
Failed to load org.apache.spark.examples./home/datadotz/streami
java.lang.ClassNotFoundException: org.apache.spark.examples.
Start with the documentation -- https://spark.apache.org/docs/latest/#running-the-examples-and-shell
To run one of the Java or Scala sample programs, use bin/run-example <class> [params] in the top-level Spark directory
It also mentions you can use spark-submit to run programs, which seems to take a path. Try that script instead.
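For example (a sketch: run-example expects an example class name under org.apache.spark.examples rather than a path to a .scala source file, and the jar name and main class below are made up):
./bin/run-example streaming.NetworkWordCount localhost 9999
or, once your own code is packaged into a jar with a main class:
./bin/spark-submit --class com.datadotz.streaming.WordCount --master local[2] /home/datadotz/streaming/wc_str-assembly-0.1.jar localhost 9999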
A Spark job executed on a Dataproc cluster on Google Cloud gets stuck on a task at PythonRDD.scala:446.
The error log says Could not find valid SPARK_HOME while searching ... paths under /hadoop/yarn/nm-local-dir/usercache/root/
The thing is, SPARK_HOME should be set by default on a Dataproc cluster.
Other Spark jobs that don't use RDDs work just fine.
During the initialization of the cluster I do not reinstall Spark (though I had tried that earlier, which I previously thought caused the issue).
I also found out that all my executors were removed after about a minute of running the task.
And yes, I have tried to run the following initialization action and it didn't help:
#!/bin/bash
cat << EOF | tee -a /etc/profile.d/custom_env.sh /etc/*bashrc >/dev/null
export SPARK_HOME=/usr/lib/spark/
EOF
Any help?
I was using a custom mapping function. When I moved the function to a separate file, the problem disappeared.
I'm using Spark on HDInsight with a Jupyter notebook, and I'm using the %%configure "magic" to import packages. Every time there is a problem with a package, Spark crashes with the error:
The code failed because of a fatal error: Status 'shutting_down' not
supported by session..
or
The code failed because of a fatal error: Session 28 unexpectedly
reached final status 'dead'. See logs:
Usually the problem was that I had mistyped the package name, so after a few attempts I could solve it. Now I'm trying to import spark-streaming-eventhubs_2.11 and I think I got the name right, but I still receive the error. I looked at all kinds of logs but still couldn't find one that shows any relevant info. Any idea how to troubleshoot similar errors?
%%configure -f
{ "conf": {"spark.jars.packages": "com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5" }}
Additional info: when I run
spark-shell --conf spark.jars.packages=com.microsoft.azure:spark-streaming-eventhubs_2.11:2.0.5
The shell starts fine, and downloads the package
I was finally able to find the log files which contain the error. There are two log files which could be interesting:
Livy log: livy-livy-server.out
Yarn log
On my HDInsight cluster, I found the Livy log by connecting to one of the head nodes with SSH and downloading a file at this path (this log didn't contain useful info):
/var/log/livy/livy-livy-server.out
The actual error was in the YARN log file, accessible from the Yarn UI. In the HDInsight Azure portal, go to "Cluster dashboard" -> "Yarn", find your session (KILLED status), click on "Logs" in the table, find "Log Type: stderr", and click "click here for full log".
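If you prefer the command line, the same stderr can usually be pulled from a head node with the YARN CLI (the application id below is a placeholder; take the real one from the Yarn UI):
yarn logs -applicationId application_1490000000000_0001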
The problem in my case was a Scala version incompatibility between one of the dependencies of spark-streaming_2.11 and Livy. This is supposed to be fixed in Livy 0.4. More info here
I have been following this tutorial in order to set up Zeppelin on a Spark cluster (version 1.5.2) in HDInsight, on Linux. Everything worked fine; I managed to successfully connect to the Zeppelin notebook through the SSH tunnel. However, when I try to run any kind of paragraph, the first time I get the following error:
java.io.IOException: No FileSystem for scheme: wasb
After getting this error, if I try to rerun the paragraph, I get another error:
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
These errors occur regardless of the code I enter, even if there is no reference to HDFS. What I'm saying is that I get the "No FileSystem" error even for a trivial Scala expression, such as parallelize.
Is there a missing configuration step?
I am downloading the tarball from the script you pointed to as I type, but what I am guessing is that your Zeppelin and Spark installs are not set up to work with wasb. In order to get Spark to work with wasb you need to add some jars to the classpath. To do this, add something like the following to your spark-defaults.conf (the paths might be different in HDInsight; this is from HDP on IaaS):
spark.driver.extraClassPath /usr/hdp/2.3.0.0-2557/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/hadoop-azure-2.7.1.2.3.0.0-2557.jar
spark.executor.extraClassPath /usr/hdp/2.3.0.0-2557/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.3.0.0-2557/hadoop/hadoop-azure-2.7.1.2.3.0.0-2557.jar
Once you have Spark working with wasb, the next step is to get those same jars onto the Zeppelin classpath. A good way to test your setup is to make a notebook that prints your environment variables and classpath:
sys.env.foreach(println(_))
val cl = ClassLoader.getSystemClassLoader
cl.asInstanceOf[java.net.URLClassLoader].getURLs.foreach(println)
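If the azure-storage and hadoop-azure jars show up there, a quick read from wasb is another sanity check (the container, storage account and path below are placeholders):
val lines = sc.textFile("wasb://mycontainer@myaccount.blob.core.windows.net/example/data.txt")
println(lines.count())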
Also, looking at the install script, it is trying to pull the Zeppelin jar from wasb; you might want to change that config (in zeppelin.sh) to point somewhere else while you try some of these changes out:
export SPARK_YARN_JAR=wasb:///apps/zeppelin/zeppelin-spark-0.5.5-SNAPSHOT.jar
I hope this helps. If you are still having problems I have some other ideas, but I would start with these first.
I have managed to set up Cassandra + Thrift and LazyBoy, the Python wrapper for Thrift, and I have followed an example mentioned in the LazyBoy wiki. After testing that example I'm getting an error with an exception:
cassandra.ttypes.InvalidRequestException: InvalidRequestException(why='Keyspace
UserData does not exist in this schema.')
That's the exception I'm getting; I'm hoping for a helping hand.
Thanks.
Make sure that the keyspace 'UserData' exists in your configuration file (conf/storage-conf.xml), e.g.:
<Keyspaces>
  <Keyspace Name="UserData">
    ....
  </Keyspace>
</Keyspaces>
For those just starting out with Cassandra/pycassa: maybe you've been working through this tutorial and have gotten stuck on the line
col_fam = pycassa.ColumnFamily(pool, 'Standard1')
with an error that looks like
pycassa.cassandra.ttypes.InvalidRequestException: InvalidRequestException(why='Keyspace Keyspace1 does not exist')
To resolve this, start Cassandra:
bin/cassandra -f
Then, in another terminal window, load the sample schema using:
bin/cassandra-cli -host localhost --file conf/schema-sample.txt
Then you should make it past that line in the tutorial.