I've just registered a free instance of "Analytics for Apache Spark" and followed this tutorial to use the spark-submit script that IBM provides for this purpose, in order to run an app from my local machine on a Bluemix cloud cluster. The issue is the following: I did everything described in the tutorial and launched this script
./spark-submit.sh --vcap ./vcap.json --deploy-mode cluster --master \
  https://spark.eu-gb.bluemix.net --files /home/vito/vinorosso2.csv \
  --conf spark.service.spark_version=2.2.0 \
  /home/vito/workspace_2/sbt-esempi/target/scala-2.11/isolationF3.jar \
  --class progettoSisDis.MasterNode
Everything proceeds fine (the dataset vinorosso2.csv and my fat JAR are uploaded correctly) until the terminal says "submission complete". At that point, the log file created by the script contains this error message:
Submit job result: Invalid plan and spark version combination in HTTP request (ibm.SparkService.PayGoPersonal, 2.0.0)
Submission ID:
ERROR: Problem submitting job. Exit
So, wasn't it enough to register a free instance of Analytics for Apache Spark to submit a Spark job? I hope someone can help. By the way, if it helps, on my local machine I'm using Spark with IntelliJ IDEA (Scala). Bye
From https://console.bluemix.net/docs/services/AnalyticsforApacheSpark/index-gentopic2.html#using_spark-submit you need to be using Spark version 1.6.x or 2.0.x. Your submit job is set to version 2.2.0. Try submitting using spark.service.spark_version=2.0.0 (assuming your code will work with this version of Spark).
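For example, the command from the question with only the version changed (the paths are the ones from the question; note that the usual spark-submit convention is to put --class before the application jar, so it is shown that way here):
./spark-submit.sh --vcap ./vcap.json --deploy-mode cluster --master \
  https://spark.eu-gb.bluemix.net --files /home/vito/vinorosso2.csv \
  --conf spark.service.spark_version=2.0.0 \
  --class progettoSisDis.MasterNode \
  /home/vito/workspace_2/sbt-esempi/target/scala-2.11/isolationF3.jar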
Related
I'm getting an error while trying to configure Spark with MongoDB on my EMR instance. Below is the command:
spark-shell --conf "spark.mongodb.output.uri=mongodb://admin123:Vibhuti21!#docdb-2021-09-18-15-29-54.cluster-c4paykiwnh4d.us-east-1.docdb.amazonaws.com:27017/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false" "spark.mongodb.output.collection="ecommerceCluster" --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.3
I'm a beginner in Spark & AWS. Can anyone please help?
DocumentDB requires a CA bundle to be installed on each node where your Spark executors will launch. So you first need to install the CA certificates on each instance; AWS has a guide for this (under the Java section) with two bash scripts that make things easier.
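As a rough sketch of what those scripts do (the bundle URL, paths, and password below are assumptions; use the scripts from the AWS guide as the authoritative version):
mkdir -p /tmp/certs && cd /tmp/certs
# download the combined CA bundle (URL assumed to be the standard RDS bundle location)
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
# split the bundle into individual certificates and import each one into a JKS truststore
csplit -z -f crt- rds-combined-ca-bundle.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
for CERT in crt-*; do
  keytool -importcert -noprompt -trustcacerts -alias "$CERT" \
    -file "$CERT" -keystore rds-truststore.jks -storepass <yourpassword>
done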
Once these certs are installed, your Spark command needs to reference the truststore and its password using configuration parameters you can pass to Spark. Here is an example that I ran, and it worked fine.
spark-submit \
  --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.3 \
  --conf "spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=/tmp/certs/rds-truststore.jks -Djavax.net.ssl.trustStorePassword=<yourpassword>" \
  pytest.py
You can provide the same configuration options to spark-shell as well.
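For instance, the spark-shell equivalent of the command above (same assumed truststore path and placeholder password):
spark-shell \
  --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.3 \
  --conf "spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=/tmp/certs/rds-truststore.jks -Djavax.net.ssl.trustStorePassword=<yourpassword>"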
One thing I did find tricky was that the Mongo Spark connector doesn't appear to understand the ssl_ca_certs parameter in the connection string, so I removed it to avoid warnings from Spark; the executors reference the truststore from the configuration anyway.
Can anyone please let me know how to submit a Spark job from my local machine and connect to a Cassandra cluster?
Currently I submit the Spark job after logging in to a Cassandra node through PuTTY and running the dse spark-submit command below.
Command:
dse spark-submit --class ***** --total-executor-cores 6 --executor-memory 2G **/**/**.jar --config-file build/job.conf --args
With the above command my Spark job is able to connect to the cluster and it executes, but I sometimes run into issues.
So I want to submit the Spark job from my local machine. Can anyone please guide me on how to do this?
There are several things you could mean by "run my job locally"
Here are a few of my interpretations
Run the Spark Driver on a Local Machine but access a remote Cluster's resources
I would not recommend this for a few reasons, the biggest being that all of your job management will still be handled between your remote machine and the executors in the cluster. This would be the equivalent of having a Hadoop Job Tracker running in a different cluster than the rest of the Hadoop distribution.
To accomplish this, though, you need to run spark-submit with a specific master URI. Additionally, you would need to specify a Cassandra node via spark.cassandra.connection.host
dse spark-submit --master spark://sparkmasterip:7077 --conf spark.cassandra.connection.host=aCassandraNode --flags jar
It is important that you keep the jar LAST. All arguments after the jar are interpreted as arguments for the application and not spark-submit parameters.
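To illustrate (the jar name and arguments here are placeholders), everything after the application jar is passed to your main method rather than to spark-submit:
dse spark-submit --master spark://sparkmasterip:7077 --conf spark.cassandra.connection.host=aCassandraNode myApp.jar appArg1 appArg2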
Run Spark Submit on a local Machine but have the Driver run in the Cluster (Cluster Mode)
Cluster mode means your local machine sends the jar and environment string over to the Spark Master. The Spark Master then chooses a worker to actually run the driver, and the driver is started as a separate JVM by that worker. This is triggered using the --deploy-mode cluster flag, in addition to specifying the Master and the Cassandra connection host.
dse spark-submit --master spark://sparkmasterip:7077 --deploy-mode cluster --conf spark.cassandra.connection.host=aCassandraNode --flags jar
Run the Spark Driver in Local Mode
Finally there exists a local mode for Spark which starts the entire Spark framework in a single JVM. This is mainly used for testing. Local mode is activated by passing `--master local`
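For example, using the same placeholder style as the commands above (local[*] simply means use all available cores on the machine):
dse spark-submit --master local[*] --conf spark.cassandra.connection.host=aCassandraNode jar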
For more information check out the Spark Documentation on submitting applications
http://spark.apache.org/docs/latest/submitting-applications.html
I am new to HDInsight Spark; I am trying to run a use case to learn how things work in an Azure Spark cluster. This is what I have done so far:
1. Created the Azure Spark cluster.
2. Created a jar by following the steps described in the link: create a standalone Scala application to run on an HDInsight Spark cluster. I used the same Scala code as given in the link.
3. SSHed into the head node.
4. Uploaded the jar to blob storage using the link: using the Azure CLI with Azure storage.
5. Copied the zip to the machine with hadoop fs -copyToLocal.
I have checked that the jar gets uploaded to the head node (machine). I want to run that jar and get the results as stated in the link given in point 2 above.
What will be the next step? How can I submit a Spark job and get the results using the command-line interface?
For example, suppose you have created a jar for your program called submit.jar. In order to submit it to your cluster with a dependency, you can use the syntax below.
spark-submit --master yarn --deploy-mode cluster --packages "com.microsoft.azure:azure-eventhubs-spark_2.11:2.2.5" --class com.ex.abc.MainMethod "wasb://space-hdfs@yourblob.blob.core.windows.net/xx/xx/submit.jar" "param1.json" "param2"
Here --packages is used to pull in a dependency for your program; you can also use the --jars option followed by a jar path instead, e.g. --jars "path/to/dependency/abc.jar" (see the example below).
--class specifies the class containing your program's main method.
After that, specify the path to your program jar, and you can pass parameters after it if needed, as shown above.
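For instance, the same submit using a local dependency jar instead of --packages (the dependency path is a placeholder):
spark-submit --master yarn --deploy-mode cluster --jars "path/to/dependency/abc.jar" --class com.ex.abc.MainMethod "wasb://space-hdfs@yourblob.blob.core.windows.net/xx/xx/submit.jar" "param1.json" "param2"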
A couple of options on submitting spark jars:
1) If you are already on the head node, you can submit the job with spark-submit.
See Apache submit jar documentation
2) An easier alternative is to submit spark jar via livy after uploading the jar to wasb storage.
See submit via livy doc. You can skip step 5 if you do it this way.
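For reference, a minimal sketch of a Livy batch submission for a jar already in wasb storage (the cluster name, credentials, jar path, and class name are placeholders, and the endpoint assumes the default HDInsight Livy URL):
curl -k --user "admin:<password>" -H "Content-Type: application/json" -X POST \
  -d '{ "file": "wasb://space-hdfs@yourblob.blob.core.windows.net/xx/xx/submit.jar", "className": "com.ex.abc.MainMethod" }' \
  "https://<clustername>.azurehdinsight.net/livy/batches"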
I am trying to access the Application Detail UI of a job submitted to Spark Stand Alone cluster (v 1.4). I submitted using the following command:
./spark-submit --master spark://MASTER:7077 --deploy-mode cluster ....
Seems like the link to "Application Detail UI" (port:4040) is broken. Everything works if I submit locally (remove --deploy-mode cluster).
Is there a workaround? Thanks in advance.
There should be a URL for the application UI printed in the log file. Search for the words "started SparkUI at" and you should find it.
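For example, in standalone cluster mode the driver's logs usually land in the work directory of the worker that launched it, so something along these lines should surface the URL (the exact path is an assumption about a default installation):
grep -ri "started sparkui at" "$SPARK_HOME"/work/driver-*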
I'm using Cloudera Quickstart VM 5.3.0 (running in Virtual Box 4.3 on Windows 7) and I wanted to learn Spark (on YARN).
I started Cloudera Manager. In the sidebar I can see all the services; Spark is there, but in standalone mode. So I click on "Add a new service" and select "Spark". Then I have to select the set of dependencies for this service; I have no choice, I must pick HDFS/YARN/ZooKeeper.
Next I have to choose a History Server and a Gateway. I run the VM in local mode, so I can only choose localhost.
I click on "Continue" and this error occurs (plus 69 stack traces):
A server error has occurred. Send the following information to
Cloudera.
Path : http://localhost:7180/cmf/clusters/1/add-service/reviewConfig
Version: Cloudera Express 5.3.0 (#155 built by jenkins on
20141216-1458 git: e9aae1d1d1ce2982d812b22bd1c29ff7af355226)
org.springframework.web.bind.MissingServletRequestParameterException:Required
long parameter 'serviceId' is not present at
AnnotationMethodHandlerAdapter.java line 738 in
org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter$ServletHandlerMethodInvoker
raiseMissingParameterException()
I don't know if an internet connection is needed, but I should point out that I can't connect to the internet from the VM. (EDIT: Even with an internet connection I get the same error.)
I have no idea how to add this service. I tried with and without a gateway and with many network options, but it never worked. I checked the known issues; nothing...
Does anyone know how I can solve this error or work around it? Thanks for any help.
Julien,
Before I answer your question I'd like to make some general notes about Spark in Cloudera Distribution of Hadoop 5 (CDH5):
Spark runs in three different modes: (1) local, (2) Spark's own stand-alone manager, and (3) other cluster resource managers like Hadoop YARN, Apache Mesos, and Amazon EC2.
Spark works out-of-the-box with CDH 5 for (1) and (2). You can start a local interactive Spark session in Scala using the spark-shell command, or in Python using pyspark, without passing any arguments. I find the interactive Scala and Python interpreters helpful for learning to program with Resilient Distributed Datasets (RDDs).
I was able to recreate your error on my CDH 5.3.x distribution. I don't mean to take credit for the bug you discovered, but I have posted it to the Cloudera developer community for feedback.
In order to use Spark in the QuickStart pseudo-distributed environment, see if all of the Spark daemons are running using the following command (you can do this inside the Cloudera Manager (CM) UI):
[cloudera@quickstart simplesparkapp]$ sudo service --status-all | grep -i spark
Spark history-server is not running [FAILED]
Spark master is not running [FAILED]
Spark worker is not running [FAILED]
I've manually stopped all of the stand-alone Spark services so we can try to submit the Spark job within Yarn.
In order to run Spark inside a Yarn container on the quick start cluster, we have to do the following:
Set HADOOP_CONF_DIR to the root of the directory containing the yarn-site.xml configuration file. This is typically /etc/hadoop/conf in CDH5. You can set this variable using the command export HADOOP_CONF_DIR="/etc/hadoop/conf".
Submit the job using spark-submit and specify that you are using Hadoop YARN (see the example after this list).
spark-submit --class CLASS_PATH --master yarn JAR_DIR ARGS
Check the job status in Hue and compare to the Spark History server. Hue should show the job placed in a generic Yarn container and Spark History should not have a record of the submitted job.
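Putting steps 1 and 2 together, a minimal sketch (the class name and jar path are placeholders for whatever your application builds):
export HADOOP_CONF_DIR="/etc/hadoop/conf"
spark-submit --class com.example.SimpleApp --master yarn \
  /home/cloudera/simplesparkapp/target/scala-2.10/simple-spark-app_2.10-1.0.jar arg1 arg2
# you can also confirm it ran on YARN with: yarn application -list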
References used:
Learning Spark, Chapter 7
Sandy Ryza's Blog Post on Spark and CDH5
Spark Documentation for Running on Yarn