How to run multi-node on the spark-shell? - apache-spark

I have been using spark-submit to test my code on a multi-node system.
(Of course, I specified the master option as the master server address to get the multi-node environment.)
However, instead of using spark-submit, I would like to use spark-shell to test my code on the cluster. I don't know how to configure the multi-node cluster settings for spark-shell, though.
I think that just using spark-shell without changing any settings will result in local mode.
I searched for information and tried the commands below.
scala> sc.stop()
...
scala> import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.{SparkContext, SparkConf}
scala> val sc = new SparkContext(new SparkConf().setAppName("shell").setMaster("my server address"))
...
scala> import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.SQLContext
scala> val sqlContext = new SQLContext(sc)
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext#567a2954
However, I am not sure whether this is the right way to set up a multi-node cluster with spark-shell.

Have you tried the --master parameter of spark-shell? For Spark Standalone:
./spark-shell --master spark://master-ip:7077
The Spark shell is just a driver; it will connect to whatever cluster you specify in the master parameter.
Edit:
For YARN use
./spark-shell --master yarn

If you used setMaster("my server address") and "my server address" is not "local", then it won't run in local mode.
It is fine to set the master address in code, but in production you'd pass the --master parameter on the CLI to spark-shell or spark-submit.
You can also write a separate .scala file and pass it with spark-shell -i <filename>.scala

Related

How to create Pyspark application

My requirement is to read data from HDFS using pyspark, keep only the required columns, remove the NULL values, and then write the processed data back to HDFS. Once these steps are completed, we need to delete the raw dirty data from HDFS. Here is my script for each operation.
Import the Libraries and dependencies
#Spark Version = > version 2.4.0-cdh6.3.1
from pyspark.sql import SparkSession
sparkSession = SparkSession.builder.appName("example-pyspark-read-and-write").getOrCreate()
import pyspark.sql.functions as F
Read the Data from HDFS
df_load_1 = sparkSession.read.csv('hdfs:///cdrs/file_path/*.csv', sep = ";")
Select only the required columns
col = ['_c0', '_c1', '_c2', '_c3', '_c5', '_c7', '_c8', '_c9', '_c10', '_c11', '_c12', '_c13', '_c22', '_c32', '_c34', '_c38', '_c40',
       '_c43', '_c46', '_c47', '_c50', '_c52', '_c53', '_c54', '_c56', '_c57', '_c59', '_c62', '_c63', '_c77', '_c81', '_c83']
df1 = df_load_1.select(*col)
Check for NULL values and, if there are any, remove them
df_agg_1 = df1.agg(*[F.count(F.when(F.isnull(c), c)).alias(c) for c in df1.columns])
df_agg_1.show()
df1 = df1.na.drop()
Writing the pre-processed data to HDFS, same cluster but different directory
df1.write.csv("hdfs://nm/pyspark_cleaned_data/py_in_gateway.csv")
Deleting the original raw data from HDFS
def delete_path(spark, path):
    sc = spark.sparkContext
    fs = (sc._jvm.org
          .apache.hadoop
          .fs.FileSystem
          .get(sc._jsc.hadoopConfiguration())
          )
    fs.delete(sc._jvm.org.apache.hadoop.fs.Path(path), True)
Execute the function by passing the HDFS absolute path
delete_path(spark , '/cdrs//cdrs/file_path/')
pyspark and HDFS commands
I am able to do all the operations successfully from the pyspark prompt.
Now I want to develop the application and submit the job using spark-submit.
For example:
spark-submit --master yarn --deploy-mode client project.py for local
spark-submit --master yarn --deploy-mode cluster project.py for cluster
At this point I am stuck. I am not sure what parameter I am supposed to pass in place of yarn in spark-submit, and whether simply copying and pasting all the above commands into a .py file will work. I am very new to this technology.
Basically your Spark job will run on a cluster. Spark 2.4.4 supports YARN, Kubernetes, Mesos and Spark standalone clusters (see the doc).
--master yarn specifies that you are submitting your Spark job to a YARN cluster.
--deploy-mode specifies whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)
spark-submit --master yarn --deploy-mode client project.py for client mode
spark-submit --master yarn --deploy-mode cluster project.py for cluster mode
spark-submit --master local project.py for local mode
You can provide other arguments while submitting your Spark job, like --driver-memory, --executor-memory, --num-executors etc.; check here.
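To turn the interactive steps from the question into a submittable application, one approach is to collect them into a single script with a main entry point; the script itself stays the same for client and cluster mode, and only the spark-submit flags change. Below is a minimal sketch along those lines; the paths and the shortened column list are placeholders based on the question, so adjust them before use.
# project.py - illustrative sketch assembling the steps from the question
from pyspark.sql import SparkSession

def main():
    spark = SparkSession.builder.appName("example-pyspark-read-and-write").getOrCreate()

    # Read the raw CSV files from HDFS (path taken from the question)
    df = spark.read.csv('hdfs:///cdrs/file_path/*.csv', sep=";")

    # Keep only the required columns (shortened here; use the full list from the question)
    cols = ['_c0', '_c1', '_c2', '_c3']
    df = df.select(*cols)

    # Remove rows containing NULL values
    df = df.na.drop()

    # Write the cleaned data back to HDFS
    df.write.csv("hdfs://nm/pyspark_cleaned_data/py_in_gateway.csv")

    # Delete the original raw data using the Hadoop FileSystem API
    sc = spark.sparkContext
    fs = sc._jvm.org.apache.hadoop.fs.FileSystem.get(sc._jsc.hadoopConfiguration())
    fs.delete(sc._jvm.org.apache.hadoop.fs.Path('/cdrs/file_path/'), True)  # adjust to your raw data directory

    spark.stop()

if __name__ == "__main__":
    main()
This file can then be submitted with, for example, spark-submit --master yarn --deploy-mode client project.py or with --deploy-mode cluster, as described above.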

How can I run Pyspark interactively in Jupyter using YARN-client mode?

Now I have succeeded in running Pyspark in Jupyter in local mode using the second method mentioned in this blog.
Here is the code:
import findspark
findspark.init()
from pyspark import SparkContext
sc = SparkContext("local", "First App")
I want to run it interactively in YARN-client mode. How can I do it?
Let's go further: how do I run in other modes, e.g. standalone mode and YARN-cluster mode?
According to the docs:
Master URLs accept a yarn parameter, based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable
So I can simply use:
sc = SparkContext("yarn-client", "First App")

Submit an application to a standalone spark cluster running in GCP from Python notebook

I am trying to submit a Spark application to a standalone Spark (2.1.1) cluster of 3 VMs running in GCP from my Python 3 notebook (running on my local laptop), but for some reason the Spark session throws the error "StandaloneAppClient$ClientEndpoint: Failed to connect to master sparkmaster:7077".
Environment details: IPython and the Spark master are running in one GCP VM called "sparkmaster". 3 additional GCP VMs are running Spark workers and Cassandra clusters. I connect from my local laptop (MBP) using Chrome to the GCP VM IPython notebook on "sparkmaster".
Please note that it works from the terminal:
bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1 --master spark://sparkmaster:7077 ex.py 1000
Running it from Python Notebook:
import os
os.environ["PYSPARK_SUBMIT_ARGS"] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1 pyspark-shell'
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark=SparkSession.builder.master("spark://sparkmaster:7077").appName('somatic').getOrCreate() # This step works if I use .master('local')
df = spark \
.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092") \
.option("subscribe", "gene") \
.load()
So far I have tried these:
I have tried to change the Spark master node's spark-defaults.conf and spark-env.sh to add SPARK_MASTER_IP.
Tried to find the STANDALONE_SPARK_MASTER_HOST=`hostname -f` setting so that I can remove "-f". For some reason my Spark master UI shows FQDN:7077, not hostname:7077.
Passed the FQDN as a param to .master() and os.environ["PYSPARK_SUBMIT_ARGS"].
Please let me know if you need more details.
After doing some more research I was able to resolve the issue. It was due to a simple environment variable called SPARK_HOME. In my case it was pointing to Conda's /bin (pyspark was running from that location), whereas my Spark setup was in a different path. The simple fix was to add
export SPARK_HOME="/home/<<your location path>>/spark/" to the .bashrc file (I want this to be attached to my profile, not to the Spark session).
How I have done it:
Step 1: ssh to the master node; in my case it was the same as the IPython kernel/server VM in GCP
Step 2:
cd ~
sudo nano .bashrc
scroll down to the last line and paste the below line
export SPARK_HOME="/home/your/path/to/spark-2.1.1-bin-hadoop2.7/"
Ctrl+X, then Y and Enter to save the changes
Note: I have also added a few more details to the environment section for clarity.
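If you prefer to set SPARK_HOME only for the notebook session rather than in .bashrc, a minimal sketch of the same fix looks like this; the Spark installation path is a placeholder, and the master URL and app name are taken from the question.
import os

# Assumption: point this at the actual Spark installation on the master VM
os.environ["SPARK_HOME"] = "/home/your/path/to/spark-2.1.1-bin-hadoop2.7/"

import findspark
findspark.init()  # picks up the SPARK_HOME set above

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("spark://sparkmaster:7077")
         .appName("somatic")
         .getOrCreate())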

Read files sent with spark-submit by the driver

I am sending a Spark job to run on a remote cluster by running
spark-submit ... --deploy-mode cluster --files some.properties ...
I want to read the content of the some.properties file by the driver code, i.e. before creating the Spark context and launching RDD tasks. The file is copied to the remote driver, but not to the driver's working directory.
The ways around this problem that I know of are:
Upload the file to HDFS
Store the file in the app jar
Both are inconvenient since this file is frequently changed on the submitting dev machine.
Is there a way to read the file that was uploaded using the --files flag during the driver code main method?
Yes, you can access files uploaded via the --files argument.
This is how I'm able to access files passed in via --files:
./bin/spark-submit \
--class com.MyClass \
--master yarn-cluster \
--files /path/to/some/file.ext \
--jars lib/datanucleus-api-jdo-3.2.6.jar,lib/datanucleus-rdbms-3.2.9.jar,lib/datanucleus-core-3.2.10.jar \
/path/to/app.jar file.ext
and in my Spark code:
import scala.io.Source

val filename = args(0)
val linecount = Source.fromFile(filename).getLines.size
I do believe these files are downloaded onto the workers in the same directory as the jar is placed, which is why simply passing the filename and not the absolute path to Source.fromFile works.
After some investigation, I found one solution for the above issue: send the any.properties configuration during spark-submit and use it in the Spark driver before and after SparkSession initialization. Hope it will help you.
any.properties
spark.key=value
spark.app.name=MyApp
SparkTest.java
import java.io.File;

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public class SparkTest {
    public static void main(String[] args) {
        String warehouseLocation = new File("spark-warehouse").getAbsolutePath();
        Config conf = loadConf();
        System.out.println(conf.getString("spark.key"));
        // Initialize SparkContext and use configuration from the properties file
        SparkConf sparkConf = new SparkConf(true).setAppName(conf.getString("spark.app.name"));
        SparkSession sparkSession =
            SparkSession.builder().config(sparkConf).config("spark.sql.warehouse.dir", warehouseLocation)
                .enableHiveSupport().getOrCreate();
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
    }

    public static Config loadConf() {
        String configFileName = "any.properties";
        System.out.println(configFileName);
        Config configs = ConfigFactory.load(ConfigFactory.parseFile(new File(configFileName)));
        System.out.println(configs.getString("spark.key")); // get value from properties file
        return configs;
    }
}
Spark Submit:
spark-submit --class SparkTest --master yarn --deploy-mode client --files any.properties,yy-site.xml --jars ...........
If you run spark-submit --help, you will find that this option is only for the working directory of the executors, not the driver.
--files FILES: Comma-separated list of files to be placed in the working directory of each executor.
The --files and --archives options support specifying file names with #, just like Hadoop.
For example, you can specify --files localtest.txt#appSees.txt and this will upload the file you have locally named localtest.txt into the Spark worker directory, but it will be linked to by the name appSees.txt, and your application should use the name appSees.txt to reference it when running on YARN.
This works for my Spark Streaming application in both yarn/client and yarn/cluster mode.
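As an illustration of that executor-side behaviour, here is a minimal pyspark sketch; it assumes the application was submitted with --files localtest.txt#appSees.txt, and uses SparkFiles.get to resolve the localized copy from inside a task rather than in driver code.
from pyspark import SparkContext, SparkFiles

sc = SparkContext(appName="read-aliased-file")

def first_line(_):
    # SparkFiles.get resolves the local path of a file shipped via --files
    with open(SparkFiles.get("appSees.txt")) as f:
        return f.readline()

# Open the file inside a task (i.e. on an executor), not in driver code
print(sc.parallelize([0], 1).map(first_line).collect())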
In pyspark, I find it really interesting to achieve this easily, first arrange your working directory like this:
/path/to/your/workdir/
|--code.py
|--file.txt
and then in your code.py main function, just read the file as usual:
if __name__ == "__main__":
    content = open("./file.txt").read()
then submit it without any specific configurations as follows:
spark-submit code.py
It runs correctly, which amazes me. I suppose the submit process archives any files and sub-directory files together and sends them to the driver in pyspark, while you have to archive them yourself in the Scala version. By the way, both the --files and --archives options work on the workers, not the driver, which means you can only access these files in RDD transformations or actions.
Here's a nice solution I developed in Python Spark in order to integrate any outside data as a file into your big data platform.
Have fun.
# Load any local text file from the Spark driver and return an RDD (really useful in YARN mode to integrate new data on the fly)
# (See https://community.hortonworks.com/questions/38482/loading-local-file-to-apache-spark.html)
def parallelizeTextFileToRDD(sparkContext, localTextFilePath, splitChar):
    localTextFilePath = localTextFilePath.strip(' ')
    if localTextFilePath.startswith("file://"):
        localTextFilePath = localTextFilePath[7:]
    import subprocess
    dataBytes = subprocess.check_output("cat " + localTextFilePath, shell=True)
    textRDD = sparkContext.parallelize(dataBytes.split(splitChar))
    return textRDD
# Usage example
myRDD = parallelizeTextFileToRDD(sc, '~/myTextFile.txt', '\n') # Load my local file as a RDD
myRDD.saveAsTextFile('/user/foo/myTextFile') # Store my data to HDFS
A way around the problem is to create a temporary SparkContext simply by calling SparkContext.getOrCreate() and then read the file you passed with --files using SparkFiles.get('FILE').
Once you have read the file, collect all the configuration you need into a SparkConf() variable.
After that, call this function:
SparkContext.stop(SparkContext.getOrCreate())
This will destroy the existing SparkContext, and on the next line you can simply initialize a new SparkContext with the necessary configuration, like this:
sc = SparkContext.getOrCreate(conf=conf)
You've got yourself a SparkContext with the desired settings.
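Putting those steps together, a minimal pyspark sketch of that sequence might look like the following; it assumes the file was shipped as --files any.properties and contains simple key=value lines, as in the earlier answer, so adapt the parsing to your format.
from pyspark import SparkConf, SparkContext, SparkFiles

# Temporary context, only used to resolve the file shipped with --files
tmp_sc = SparkContext.getOrCreate()
props_path = SparkFiles.get("any.properties")

# Build the desired configuration from the properties file
conf = SparkConf()
with open(props_path) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            conf.set(key, value)

# Destroy the temporary context, then start a new one with the settings just read
tmp_sc.stop()
sc = SparkContext.getOrCreate(conf=conf)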

How to get SparkContext if Spark runs on Yarn?

We have a program based on Spark standalone, and in this program we use SparkContext and SqlContext to do lots of queries.
Now we want to deploy the system on Spark running on YARN. But when we modify spark.master to yarn-cluster, the application throws an exception saying this works with spark-submit only. When we switch to yarn-client, although it no longer throws exceptions, it doesn't work properly.
It seems that if it runs on YARN, we can no longer use SparkContext directly; instead we should use something like yarn.Client, but then we don't know how to change our code to achieve what we did before with SparkContext and SqlContext.
Is there a good way to solve this? Can we get a SparkContext from yarn.Client, or should we change our code to use the new yarn.Client interfaces?
Thank you!
When you run on a cluster, you need to do a spark-submit like this:
./bin/spark-submit \
--class <main-class> \
--master <master-url> \
--deploy-mode <deploy-mode> \
--conf <key>=<value> \
... # other options
<application-jar> \
[application-arguments]
--master will be yarn
--deploy-mode will be cluster
In your application, if you have something like setMaster("local[*]"), you can remove it and build the code. When you do spark-submit with --master yarn, YARN will launch containers for you instead of the Spark standalone scheduler.
Your app code can look like this, without any setting for the master:
val conf = new SparkConf().setAppName("App Name")
val sc = new SparkContext(conf)
YARN deploy mode client is used when you want to launch the driver on the same machine the code is running from. On a cluster, the deploy mode should be cluster; this makes sure the driver is launched on one of the worker nodes by YARN.
