Exiting Java gateway on Azure HDInsight with PySpark

I am using Azure HDInsight and PySpark.
A previously working snippet now fails with the exception
"Java gateway process exited before sending the driver its port number".
At that point the PySpark source contains the comment "In Windows, ensure the Java child processes do not linger after Python has exited.".
Even restarting the HDInsight instance doesn't fix the issue.
Does anybody have an idea how to fix this?

I ran into the same problem. I logged into my HDInsight cluster via RDP and restarted the IPython service, which seems to have fixed the issue.
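
For context, this exception is raised while PySpark launches the JVM gateway, i.e. at SparkContext creation time. A minimal sketch of the kind of code that triggers it (the app name is just a placeholder):

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("gateway-check")  # placeholder app name
    # If the Java child process dies before reporting its port back to Python,
    # PySpark raises "Java gateway process exited before sending the driver its
    # port number" on the next line.
    sc = SparkContext(conf=conf)
    print(sc.version)
    sc.stop()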

Related

Spark session gets killed after a few minutes

I am using a GCP VM with Ubuntu installed on it, along with Hadoop, YARN, and Spark services. The issue is that my Spark session gets killed after some time, even though all the JPS processes are still running. Attaching a screenshot for reference. Can someone please help me with this?

Spark jobs not showing up in Hadoop UI in Google Cloud

I created a cluster in Google Cloud and submitted a Spark job. Then I connected to the UI following these instructions: I created an SSH tunnel and used it to open the Hadoop web interface. But the job is not showing up.
Some extra information:
If I connect to the master node of the cluster via SSH and run spark-shell, this "job" does show up in the Hadoop web interface.
I'm pretty sure I did this before and could see my jobs (both running and already finished). I don't know what changed in between for them to stop appearing.
The problem was that I was running my jobs in local mode. My code had a .master("local[*]") call that was causing this. After removing it, the jobs showed up in the Hadoop UI as before.
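
As an illustration (the app name below is a placeholder), the difference between the two setups looks roughly like this:

    from pyspark.sql import SparkSession

    # Hard-coding local mode keeps the whole job on the driver machine, so it
    # never reaches YARN and never shows up in the Hadoop web interface:
    # spark = SparkSession.builder.master("local[*]").appName("my-job").getOrCreate()

    # With no master set in code, spark-submit / the cluster defaults decide
    # (YARN on Dataproc), and the application appears in the UI:
    spark = SparkSession.builder.appName("my-job").getOrCreate()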

Azure HDInsight Spark cluster: install external libraries

I have an HDInsight Spark cluster. I installed TensorFlow using a script action. The installation went fine (Success).
But now when I go and create a Jupyter notebook, I get:
import tensorflow
Starting Spark application
The code failed because of a fatal error:
Session 8 unexpectedly reached final status 'dead'. See logs:
YARN Diagnostics:
Application killed by user..
Some things to try:
a) Make sure Spark has enough available resources for Jupyter to create a Spark context. For instructions on how to assign resources see http://go.microsoft.com/fwlink/?LinkId=717038
b) Contact your cluster administrator to make sure the Spark magics library is configured correctly.
I don't know how to fix this error. I tried a few things, like looking at the logs, but they are not helping.
I just want to connect to my data and train a model using TensorFlow.
This looks like an error with Spark application resources. Check the resources available on your cluster and close any applications that you don't need. Please see more details here: https://learn.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-resource-manager#kill-running-applications
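
The linked page covers doing this from the Ambari and YARN UIs. Purely as a hedged illustration, assuming direct access to the YARN ResourceManager REST endpoint (the host, port, and application id below are placeholders), listing running applications and killing one you no longer need could look like this:

    import requests

    RM = "http://<resourcemanager-host>:8088"  # placeholder address

    # List applications currently holding cluster resources.
    resp = requests.get(f"{RM}/ws/v1/cluster/apps", params={"states": "RUNNING"}).json()
    for app in (resp.get("apps") or {}).get("app", []):
        print(app["id"], app["name"], app["allocatedMB"], "MB")

    # Ask the ResourceManager to kill an application you no longer need.
    requests.put(f"{RM}/ws/v1/cluster/apps/application_1234567890123_0001/state",
                 json={"state": "KILLED"})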

How to run PySpark (possibly in client mode) on Mesosphere cluster?

I am trying to run a PySpark job on a Mesosphere cluster, but I cannot seem to get it to run. I understand that Mesos does not support cluster deploy mode for PySpark applications and that it needs to be run in client mode. I believe this is where the problem lies.
When I try submitting a PySpark job, I get the output below.
... socket.hpp:107] Shutdown failed on fd=48: Transport endpoint is not connected [107]
I believe that a Spark job running in client mode needs to connect to the nodes directly, and that this is what is being blocked?
What configuration would I need to change to be able to run a PySpark job in client mode?
When running PySpark in client mode (meaning the driver is running where you invoke Python), the driver becomes the Mesos framework. When this happens, the host the framework is running on needs to be able to connect to all nodes in the cluster, and they need to be able to connect back, meaning no NAT.
If this is indeed the cause of your problems, there are two environment variables that might be useful. If you can get a VPN in place, you can set LIBPROCESS_IP and SPARK_LOCAL_IP both to the IP of the host machine that cluster nodes can use to connect back to the driver.
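
Purely as a sketch (the IP address, app name, and Mesos master URL below are placeholders; exporting the variables in the shell before spark-submit works just as well, since the driver JVM inherits the environment):

    import os
    from pyspark import SparkConf, SparkContext

    driver_ip = "10.0.0.5"  # placeholder: the VPN address the cluster nodes can reach
    os.environ["LIBPROCESS_IP"] = driver_ip   # address libmesos binds to / advertises
    os.environ["SPARK_LOCAL_IP"] = driver_ip  # address the Spark driver binds to

    conf = (SparkConf()
            .setAppName("pyspark-client-mode")
            .setMaster("mesos://zk://<zookeeper-host>:2181/mesos"))  # placeholder master URL
    sc = SparkContext(conf=conf)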

HDInsight word count map reduce program stuck at mapper 100% and reducer 0%

I am new to Hadoop and I have a very similar problem to the one posted here. The only difference is that the OP runs Hadoop on Linux, whereas I am running it on Windows.
I have installed the Hadoop Azure HDInsight Emulator on my local machine. When I run a simple word count program, the mapper jobs run perfectly to 100%, but the reduce job gets stuck at 0%.
I tried debugging it as suggested by Chris (in response to this question) and found the problem to be the hostname on which the reducer jobs run (which was the exact problem the OP had).
The reducer is not running on localhost; instead it runs on the hostname 192.168.17.213, which is not getting resolved, so the reducer cannot progress from there.
These are error logs
copy failed: attempt_201402111921_0017_m_000000_0 from 192.168.17.213
2014-02-12 01:51:53,073 WARN org.apache.hadoop.mapred.ReduceTask:
java.net.ConnectException: Connection timed out: connect
The OP got that issue resolved by changing the /etc/hosts file so the hostname points to localhost.
But that seems to be a Linux config. How do I set my hostname to localhost in my Hadoop Azure HDInsight Emulator?
There is an article showing how to run the word count MapReduce program on the HDInsight Emulator: Get started with HDInsight Emulator, located at http://www.windowsazure.com/en-us/documentation/articles/hdinsight-get-started-emulator/.
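
As a side note on the hosts question itself: on Windows the counterpart of /etc/hosts is normally C:\Windows\System32\drivers\etc\hosts, so a hypothetical entry mapping the unresolvable hostname back to the loopback address would look like:

    127.0.0.1    <hostname-the-reducer-tries-to-reach>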
