I want to copy some property files to the master and workers while submitting a Spark job,
so, as stated in the docs, I am using --files to copy the files into the executors' working directory.
However, the command below is not copying anything into the executors' working directory. If anybody has an idea, please share.
gcloud dataproc jobs submit spark --cluster=cluster-name --class=dataproc.codelab.word_count.WordCount --jars=gs://my.jar --region=us-central1 --files=gs://my.properties -- gs://my/input/ gs://my/output3/
According to the official Spark documentation, when Spark runs on YARN, the Spark executor uses the local directory configured for YARN as its working directory, which by default is /hadoop/yarn/nm-local-dir/usercache/{userName}/appcache/{applicationId}.
So, based on your description, if the file does show up there then it is working as expected.
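If you want to double-check on a worker node while the job is running, you can SSH in and look for the file under that directory (using the same placeholders as above):

ls -R /hadoop/yarn/nm-local-dir/usercache/{userName}/appcache/{applicationId}/ | grep my.properties

Files distributed with --files are localized there under their base name, so the job itself can open my.properties relative to its working directory.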
Related
I have deployed a Spark cluster and submit Spark jobs using the YARN resource manager. My cluster works fine with the HDFS (Hadoop) setup, but I am facing a problem with Spark staging when I run spark-submit; below is a screenshot of the errors where I am stuck.
[screenshot of the Spark staging errors]
From where does Spark execute this jar file? I have also added the jar to the Spark jars directory.
I am using Apache Spark 2.4 and submitting 3K-4K Spark jobs daily to the YARN resource manager in cluster mode.
Problem: every time I submit a job via spark-submit, it creates temporary folders in /tmp (on the edge/gateway node). Since the cluster is multi-tenant (other teams do the same on the gateway node), the /tmp folder keeps filling up to 100% and I am then unable to submit Spark jobs.
I have tried to run jobs with the config below, but spark-submit still creates one folder in /tmp and the remaining folders in /ops/data/tmp:
spark.local.dir=/ops/data/tmp
Could you please advise how to completely avoid the use of the /tmp folder on the edge node? Thank you.
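For reference, this is roughly how the job is submitted (the class and jar names below are placeholders). The SPARK_SUBMIT_OPTS export is an assumption I am testing, on the theory that the leftover /tmp folder comes from the launcher JVM's java.io.tmpdir rather than from Spark's scratch space:

# assumption: also redirect the spark-submit launcher JVM's temp dir
export SPARK_SUBMIT_OPTS="-Djava.io.tmpdir=/ops/data/tmp"
spark-submit --master yarn --deploy-mode cluster --conf spark.local.dir=/ops/data/tmp --class com.example.MyApp my-app.jar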
I am trying to execute a Spark jar on Dataproc using Airflow's DataProcSparkOperator. The jar is located on GCS; I am creating the Dataproc cluster on the fly and then executing this jar on the newly created cluster.
I am able to execute this with Airflow's DataProcSparkOperator using the default settings, but I am not able to configure Spark job properties (e.g. --master, --deploy-mode, --driver-memory, etc.).
I didn't get any help from the Airflow documentation, and the things I tried didn't work out.
Help is appreciated.
To configure a Spark job through DataProcSparkOperator you need to use the dataproc_spark_properties parameter.
For example, you can set deployMode like this:
DataProcSparkOperator(
dataproc_spark_properties={ 'spark.submit.deployMode': 'cluster' })
In this answer you can find more details.
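For a fuller picture, here is a sketch of what that could look like in a DAG. It assumes the older airflow.contrib import path, and the task id, cluster name, region, class, jar, arguments and memory value are all placeholders, not taken from your setup:

from airflow.contrib.operators.dataproc_operator import DataProcSparkOperator

submit_spark = DataProcSparkOperator(
    task_id='submit_spark',
    cluster_name='my-cluster',
    region='us-central1',
    main_class='com.example.MyJob',
    dataproc_spark_jars=['gs://my-bucket/my-job.jar'],
    arguments=['gs://my-bucket/input/'],
    dataproc_spark_properties={
        'spark.submit.deployMode': 'cluster',
        'spark.driver.memory': '4g',
    },
    dag=dag,
)

Flags you would normally pass to spark-submit generally map to Spark properties here (deploy mode via spark.submit.deployMode, driver memory via spark.driver.memory); the master is set to YARN by Dataproc itself.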
I just created a Google Cloud cluster (1 master and 6 workers), and Spark is configured by default.
I have pure Python code that uses NLTK to build the dependency tree for each line of a text file. When I run this code on the master with spark-submit run.py, I get the same execution time as when I run it on my own machine.
How can I make sure that the master is using the workers in order to reduce the execution time?
You can check the Spark UI. If it is running on top of YARN, open the YARN UI and click on your application id, which will open the Spark UI. Check under the Executors tab; it also shows the node IP addresses.
Could you please share your spark-submit config?
Your command spark-submit run.py doesn't seem to send your job to YARN. To do that, you need to add the --master parameter. For example, a valid command to execute a job on YARN is:
./bin/spark-submit --master yarn python/pi.py 1000
If you execute your job from the master, this execution will be straightforward. In any case, check this link for the other parameters that spark-submit accepts.
For a Dataproc cluster (Google's managed Hadoop cluster) you have two options to check the job history, including jobs that are still running:
By command line from the master: yarn application -list. This option sometimes needs additional configuration; if you have trouble, this link will be useful.
By UI. Dataproc gives you access to the Spark web UI, which makes monitoring easier. Check this link to learn how to access the Spark UI and the other Dataproc UIs. In summary, you have to create an SSH tunnel and configure your browser to use a SOCKS proxy; a sketch follows below.
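The tunnel can be opened with something like this (the master host name, zone and proxy port are placeholders; the linked docs have the exact steps):

gcloud compute ssh cluster-name-m --zone=us-central1-a -- -D 1080 -N

Then point your browser at a SOCKS proxy on localhost:1080 to reach the YARN and Spark UIs on the master.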
Hope the information above helps you.
I am new to HDInsight Spark and I am trying to run a use case to learn how things work in an Azure Spark cluster. This is what I have done so far:
1) Created an Azure Spark cluster.
2) Created a jar by following the steps described in the link: create a standalone Scala application to run on an HDInsight Spark cluster. I used the same Scala code as given in the link.
3) SSH'd into the head node.
4) Uploaded the jar to blob storage using the link: using the Azure CLI with Azure Storage.
5) Copied it to the machine with: hadoop fs -copyToLocal
I have checked that the jar gets uploaded to the head node (machine). I want to run that jar and get the results as stated in the link given in point 2 above.
What is the next step? How can I submit the Spark job and get the results using the command line interface?
For example, assuming you have created a jar for your program called submit.jar: in order to submit it to your cluster together with a dependency, you can use the syntax below.
spark-submit --master yarn --deploy-mode cluster --packages "com.microsoft.azure:azure-eventhubs-spark_2.11:2.2.5" --class com.ex.abc.MainMethod "wasb://space-hdfs@yourblob.blob.core.windows.net/xx/xx/submit.jar" "param1.json" "param2"
Here --packages includes a dependency in your program; alternatively you can use the --jars option followed by the jar path, e.g. --jars "path/to/dependency/abc.jar"
--class : the entry-point class of your program (the one with the main method)
After that, specify the path to your program's jar.
You can pass parameters after the jar path if needed, as shown above.
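For instance, a sketch of the same submission using --jars instead of --packages (the dependency path is a placeholder):

spark-submit --master yarn --deploy-mode cluster --jars "path/to/dependency/abc.jar" --class com.ex.abc.MainMethod "wasb://space-hdfs@yourblob.blob.core.windows.net/xx/xx/submit.jar" "param1.json" "param2"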
A couple of options for submitting Spark jars:
1) If you want to submit the job from the head node directly, you can use spark-submit.
See the Apache Spark documentation on submitting applications.
2) An easier alternative is to submit the Spark jar via Livy after uploading the jar to WASB storage.
See the submit-via-Livy doc. You can skip step 5 if you do it this way.
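As a rough sketch, a Livy batch submission from the command line could look like this (the cluster name and login are placeholders; the jar path is the WASB path from the example above):

curl -k --user "admin:password" -H "Content-Type: application/json" -X POST -d '{ "file":"wasb://space-hdfs@yourblob.blob.core.windows.net/xx/xx/submit.jar", "className":"com.ex.abc.MainMethod" }' "https://yourcluster.azurehdinsight.net/livy/batches"

Livy responds with a batch id that you can poll at /livy/batches/{batchId} to check the job state.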