How to spark-submit a python file in spark 2.1.0? - apache-spark

I am currently running Spark 2.1.0. I have worked most of the time in the PySpark shell, but now I need to spark-submit a Python file (similar to spark-submitting a JAR in Java). How do you do that in Python?

pythonfile.py
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("appName").getOrCreate()
sc = spark.sparkContext
rdd = sc.parallelize([1,2,3,4,5,6,7])
print(rdd.count())
Run the above program with whatever configuration you want, e.g.:
YOUR_SPARK_HOME/bin/spark-submit --master yourSparkMaster --num-executors 20 \
--executor-memory 1G --executor-cores 2 --driver-memory 1G \
pythonfile.py
These options are not mandatory; you can even run it as simply as:
YOUR_SPARK_HOME/bin/spark-submit --master sparkMaster/local pythonfile.py
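Any arguments you put after the script name are passed straight through to your script via sys.argv; a minimal sketch (the input path here is only an example, not part of the original question):
import sys

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("appName").getOrCreate()
sc = spark.sparkContext

# Invoked e.g. as: YOUR_SPARK_HOME/bin/spark-submit pythonfile.py /path/to/input.txt
input_path = sys.argv[1]
rdd = sc.textFile(input_path)
print(rdd.count())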

Related

Spark job stuck on one task only

I have set up Spark 3.x with Hadoop 3.x on YARN. I simply have to index some data using a distributed data pipeline, i.e., via Spark. The following is the code snippet that I have used for the Spark app (PySpark):
import sys
from os.path import abspath

import pysolr
from pyspark.sql import SparkSession


def index_module(row):
    pass


def start_job(DATABASE_PATH):
    global SOLR_URI  # SOLR_URI is defined elsewhere (not shown here)
    warehouse_location = abspath('spark-warehouse')
    spark = SparkSession \
        .builder \
        .appName("Python Spark SQL Hive integration example") \
        .config("spark.sql.warehouse.dir", warehouse_location) \
        .enableHiveSupport() \
        .getOrCreate()
    solr_client = pysolr.Solr(SOLR_URI)
    df = spark.read.format("csv").option("quote", "\"").option("escape", "\\") \
        .option("header", "true").option("inferSchema", "true").load(DATABASE_PATH)
    df.createOrReplaceTempView("abc")
    df2 = spark.sql("select * from abc")
    df2.toJSON().map(index_module).collect()
    solr_client.commit()


if __name__ == '__main__':
    try:
        DATABASE_PATH = sys.argv[1].strip()
    except IndexError:
        print("Input file missing !!!", file=sys.stderr)
        sys.exit()
    start_job(DATABASE_PATH)
There are about 120 CSV files and 200 million records. Ideally, each of them should be indexed. To run the job on YARN, I have run the following command (according to my Hadoop resources):
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --num-executors 5 --executor-cores 1 /PATH/myscript.py
Now about 3 days have passed and my job is still running. The following is the status of the executors as shown on the YARN dashboard:
As shown in the figures, all tasks are completed for each executor except one. Why is that? It should complete as well. What is the problem with all of the above, and what would be a possible way to fix the issue?

Spark-submit fails with return code 13 for example of wordCount

My spark-submit command is:
spark-submit --class com.sundogsoftware.spark.WordCountBetterDataset --master yarn --deploy-mode cluster SparkCourse.jar
And to define the SparkSession, I use this:
val spark = SparkSession
  .builder
  .master("spark://youness:7077")
  .appName("WordCount")
  .getOrCreate()
but at the end, my job fails with return code 13.
You need to leave the master unset in the code. It is preferable to set it later, when you issue spark-submit (spark-submit --master yarn-client ...), and you are already doing that above. Just remove .master("spark://youness:7077") from your code.
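With that line removed, the builder from the question becomes:
val spark = SparkSession
  .builder
  .appName("WordCount")
  .getOrCreate()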

Spark-submit throwing error from yarn-cluster mode

How do we pass a config file to the executors when we submit a Spark job in yarn-cluster mode?
If I change the spark-submit command below to --master yarn-client, then it works fine and I get the expected output.
spark-submit \
--files /home/cloudera/conf/omega.config \
--class com.mdm.InitProcess \
--master yarn-cluster \
--num-executors 7 \
--executor-memory 1024M \
/home/cloudera/Omega.jar \
/home/cloudera/conf/omega.config
My Spark Code:
import java.io.File

import com.typesafe.config.ConfigFactory

object InitProcess {
  def main(args: Array[String]): Unit = {
    val config_loc = args(0)
    val config = ConfigFactory.parseFile(new File(config_loc))
    val jobName = config.getString("job_name")
    .....
  }
}
I am getting the error below:
17/04/05 12:01:39 ERROR yarn.ApplicationMaster: User class threw exception: com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'job_name'
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'job_name'
Could someone help me run this command with --master yarn-cluster?
The difference between yarn-client and yarn-cluster is that in yarn-client mode the driver runs on the machine that issues the spark-submit command.
In your case, the config file is at /home/cloudera/conf/omega.config, which can be found when you run as yarn-client because the driver runs on the current machine, which holds this full/path/to/file.
But it can't be accessed in yarn-cluster mode, as the driver runs on another host, which doesn't hold this full/path/to/file.
I'd suggest executing the command in the following format:
spark-submit \
--master yarn-cluster \
--num-executors 7 \
--executor-memory 1024M \
--class com.mdm.InitProcess \
--files /home/cloudera/conf/omega.config \
/home/cloudera/Omega.jar omega.config
This sends the config file via --files with its full path, and passes it to the jar as a parameter by file name only (not a full path), since the file will be downloaded to an unknown location on the workers.
In your code, you can then use SparkFiles.get(filename) to obtain the actual full path of the downloaded file on the worker.
The change in your code should be something similar to:
val config_loc = SparkFiles.get(args(0))
SparkFiles docs
public class SparkFiles
Resolves paths to files added through SparkContext.addFile().
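For reference, the same --files + SparkFiles pattern also works in PySpark; a minimal sketch (the script and argument names here are only examples, not from the question):
import sys

from pyspark import SparkFiles
from pyspark.sql import SparkSession

# Submitted e.g. as:
# spark-submit --master yarn-cluster --files /home/cloudera/conf/omega.config myscript.py omega.config
spark = SparkSession.builder.appName("InitProcess").getOrCreate()

# sys.argv[1] is just the bare file name; SparkFiles resolves it to
# wherever spark-submit downloaded the file on this node
config_loc = SparkFiles.get(sys.argv[1])
print(config_loc)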

How to execute Spark programs with Dynamic Resource Allocation?

I am using the spark-submit command to execute Spark jobs with parameters such as:
spark-submit --master yarn-cluster --driver-cores 2 \
--driver-memory 2G --num-executors 10 \
--executor-cores 5 --executor-memory 2G \
--class com.spark.sql.jdbc.SparkDFtoOracle2 \
Spark-hive-sql-Dataframe-0.0.1-SNAPSHOT-jar-with-dependencies.jar
Now I want to execute the same program using Spark's dynamic resource allocation. Could you please help with the usage of dynamic resource allocation when executing Spark programs?
For Spark dynamic allocation, spark.dynamicAllocation.enabled needs to be set to true, because it is false by default.
This also requires spark.shuffle.service.enabled to be set to true, since the Spark application is running on YARN; see the Spark documentation for how to start the shuffle service on each NodeManager in YARN.
The following configurations are also relevant:
spark.dynamicAllocation.minExecutors,
spark.dynamicAllocation.maxExecutors, and
spark.dynamicAllocation.initialExecutors
These options can be configured for a Spark application in three ways:
1. From spark-submit with --conf <prop_name>=<prop_value> (note that initialExecutors=10 below is the same as --num-executors 10):
spark-submit --master yarn-cluster \
--driver-cores 2 \
--driver-memory 2G \
--num-executors 10 \
--executor-cores 5 \
--executor-memory 2G \
--conf spark.dynamicAllocation.minExecutors=5 \
--conf spark.dynamicAllocation.maxExecutors=30 \
--conf spark.dynamicAllocation.initialExecutors=10 \
--class com.spark.sql.jdbc.SparkDFtoOracle2 \
Spark-hive-sql-Dataframe-0.0.1-SNAPSHOT-jar-with-dependencies.jar
2. Inside the Spark program with SparkConf
Set the properties in SparkConf, then create the SparkSession or SparkContext with it:
val conf: SparkConf = new SparkConf()
conf.set("spark.dynamicAllocation.minExecutors", "5");
conf.set("spark.dynamicAllocation.maxExecutors", "30");
conf.set("spark.dynamicAllocation.initialExecutors", "10");
.....
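The equivalent in PySpark would look something like this (a minimal sketch using the same example values):
from pyspark.sql import SparkSession

# Set the dynamic-allocation properties directly on the builder
spark = (SparkSession.builder
         .appName("dynamic-allocation-demo")
         .config("spark.dynamicAllocation.enabled", "true")
         .config("spark.shuffle.service.enabled", "true")
         .config("spark.dynamicAllocation.minExecutors", "5")
         .config("spark.dynamicAllocation.maxExecutors", "30")
         .config("spark.dynamicAllocation.initialExecutors", "10")
         .getOrCreate())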
3. In spark-defaults.conf, usually located in $SPARK_HOME/conf/
Place the same configurations in spark-defaults.conf to apply them to all Spark applications when no configuration is passed on the command line or in code.
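For example, the relevant lines in spark-defaults.conf might look like this (property name and value separated by whitespace; values are only examples):
spark.dynamicAllocation.enabled            true
spark.shuffle.service.enabled              true
spark.dynamicAllocation.minExecutors       5
spark.dynamicAllocation.maxExecutors       30
spark.dynamicAllocation.initialExecutors   10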
Spark - Dynamic Allocation Confs
I just did a small demo with Spark's dynamic resource allocation. The code is on my Github. Specifically, the demo is in this release.

SparkConf not reading spark-submit arguments

SparkConf in PySpark does not pick up the configuration arguments passed to spark-submit.
My Python code is something like:
from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("foo")
sc = SparkContext(conf=conf)
# processing code...
sc.stop()
and I submit it with
PYSPARK_PYTHON="/opt/anaconda/bin/python" spark-submit foo.py \
--master local[4] --conf="spark.driver.memory=16g" --executor-memory 16g
but none of the configuration arguments are applied. That is, the application is executed with the default values of local[*] for master, 1g for driver memory and 1g for executor memory. This was confirmed by the Spark GUI.
However, the configuration arguments are respected if I use pyspark to launch the application:
PYSPARK_PYTHON="/opt/anaconda/bin/python" pyspark --master local[4] \
--conf="spark.driver.memory=8g"
Notice that --executor-memory 16g was also changed to --conf="spark.executor.memory=16g" because the former doesn't work either.
What am I doing wrong?
I believe you need to remove the = sign after --conf. Your spark-submit command should be:
PYSPARK_PYTHON="/opt/anaconda/bin/python" spark-submit foo.py \
--master local[4] --conf spark.driver.memory=16g --executor-memory 16g
Note that spark-submit also supports setting driver memory with the flag --driver-memory 16G
Apparently, the order of the arguments matters: everything after the script name is passed to the script itself as arguments, so all spark-submit options must come before it and the Python script has to be the last argument. So the call should be:
PYSPARK_PYTHON="/opt/anaconda/bin/python" spark-submit \
--master local[4] --conf="spark.driver.memory=16g" --executor-memory 16g foo.py
or, following #glennie-helles-sindholt's advice,
PYSPARK_PYTHON="/opt/anaconda/bin/python" spark-submit \
--master local[4] --driver-memory 16g --executor-memory 16g foo.py
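To double-check which values actually took effect, you can also print the resolved configuration from inside the script; a minimal sketch:
from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName("foo")
sc = SparkContext(conf=conf)

# Print every configuration value the driver actually resolved,
# including anything picked up from spark-submit flags
for key, value in sc.getConf().getAll():
    print(key, value)

sc.stop()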
