My SparkSession initialization takes forever to run on my laptop. Does anybody have any idea why? - apache-spark

My SparkSession takes forever to initialize:
from pyspark.sql import SparkSession
spark = (SparkSession
         .builder
         .appName('Huy')
         .getOrCreate())
sc = spark.sparkContext
I waited for hours without success.

I had the same problem and resolved it by setting the environment variables. They can be set directly in the Python code. You need a JDK installed under Program Files.
import os
os.environ["JAVA_HOME"] = r"C:\Program Files\Java\jdk-19"
os.environ["SPARK_HOME"] = r"C:\Program Files\Spark\spark-3.3.1-bin-hadoop2"

Related

Hide SparkSession builder output in jupyter lab

I start pyspark SparkSessions in Jupyter Lab like this:
import os
import findspark
findspark.init(os.environ['SPARK_HOME'])  # must run before importing pyspark
from pyspark.sql import SparkSession
spark = (SparkSession.builder
         .appName('myapp')
         .master('yarn')
         .config("spark.port.maxRetries", "1000")
         .config('spark.executor.cores', "2")
         .config("spark.executor.memory", "10g")
         .config("spark.driver.memory", "4g")
         #...
         .getOrCreate()
         )
And then a lot appears in the cell output...
WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/CDH-6.3.3-1.cdh6.3.3.p3969.3554875/lib/spark) overrides detected (/opt/cloudera/parcels/CDH/lib/spark).
WARNING: Running spark-class from user-defined location.
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadooplog/sparktmp
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadooplog/sparktmp
...
I would like to hide this output to clean up notebooks and make them easier to read. I've tried %%capture and spark.sparkContext.setLogLevel("ERROR") (although that only pertains to Spark session logging, and even then output still appears here and there). Neither works.
Running:
pyspark version 2.4.0
jupyterlab version 3.2.1
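No answer is shown here, but one approach worth sketching (it is not from the original thread) is to temporarily redirect the OS-level stdout/stderr file descriptors around getOrCreate(). Unlike %%capture, which only swaps Python's sys.stdout/sys.stderr objects, this can also swallow text written directly by the spark-class/JVM child process; whether it catches everything depends on how the kernel wires its file descriptors. The silence_fds helper below is hypothetical:
import os
from contextlib import contextmanager
from pyspark.sql import SparkSession
@contextmanager
def silence_fds():
    # hypothetical helper: redirect the OS-level stdout/stderr (fds 1 and 2)
    # to /dev/null so output from child processes is discarded too
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved_out, saved_err = os.dup(1), os.dup(2)
    try:
        os.dup2(devnull, 1)
        os.dup2(devnull, 2)
        yield
    finally:
        os.dup2(saved_out, 1)
        os.dup2(saved_err, 2)
        for fd in (devnull, saved_out, saved_err):
            os.close(fd)
with silence_fds():
    spark = (SparkSession.builder
             .appName('myapp')
             .master('yarn')
             .getOrCreate())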

Pyspark Failed to find data source: kafka

I am working on Kafka streaming and trying to integrate it with Apache Spark. However, I am running into issues and getting the error below.
This is the command I am using.
df_TR = Spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "taxirides").load()
ERROR:
Py4JJavaError: An error occurred while calling o77.load.: java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at http://spark.apache.org/third-party-projects.html
How can I resolve this?
NOTE: I am running this in Jupyter Notebook
import findspark
findspark.init('/home/karan/spark-2.1.0-bin-hadoop2.7')
import pyspark
from pyspark.sql import SparkSession
Spark = SparkSession.builder.appName('KafkaStreaming').getOrCreate()
from pyspark.sql.types import *
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
Everything runs fine up to this point (the code above).
df_TR = Spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "taxirides").load()
This is where things go wrong (the line above).
The blog I am following: https://www.adaltas.com/en/2019/04/18/spark-streaming-data-pipelines-with-structured-streaming/
Edit
Using spark.jars.packages works better than PYSPARK_SUBMIT_ARGS
Ref - PySpark - NoClassDefFoundError: kafka/common/TopicAndPartition
It's not clear how you ran the code. Keep reading the blog, and you see
spark-submit \
...
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0 \
sstreaming-spark-out.py
It seems you missed adding the --packages flag.
In Jupyter, you could add this:
import os
# set up submit arguments before any Spark/JVM initialization
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0'
# initialize spark (findspark.init() must run before importing pyspark)
import findspark
findspark.init()
import pyspark
Note: the _2.11:2.4.0 suffix needs to align with your Scala and Spark versions... Based on the question, yours should be Spark 2.1.0.
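As mentioned in the edit above, the package can also be declared with spark.jars.packages on the builder instead of PYSPARK_SUBMIT_ARGS. A minimal sketch of that approach (not verbatim from this answer; the coordinate assumes Spark 2.1.0 with Scala 2.11 to match the question, and it must be set before the first session/JVM is created in the notebook):
import findspark
findspark.init('/home/karan/spark-2.1.0-bin-hadoop2.7')
from pyspark.sql import SparkSession
# the package coordinate below must match your installed Spark/Scala versions
Spark = (SparkSession.builder
         .appName('KafkaStreaming')
         .config('spark.jars.packages', 'org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0')
         .getOrCreate())
df_TR = (Spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "taxirides")
         .load())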

How to run python spark script with specific jars

I have to run a Python script on an EMR instance using pyspark to query DynamoDB. I am able to do that by querying DynamoDB from the pyspark shell, which is started by including the jars with the following command.
`pyspark --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar`
I ran the following python3 script to query the data using the pyspark Python module.
import time
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, HiveContext
start_time = time.time()
SparkContext.setSystemProperty("hive.metastore.uris", "thrift://nn1:9083")
sparkSession = (SparkSession
.builder
.appName('example-pyspark-read-and-write-from-hive')
.enableHiveSupport()
.getOrCreate())
df_load = sparkSession.sql("SELECT * FROM example")
df_load.show()
print(time.time() - start_time)
This caused the following runtime exception for the missing jars:
java.lang.ClassNotFoundException Class org.apache.hadoop.hive.dynamodb.DynamoDBSerDe not found
How do I convert the pyspark --jars .. invocation to a pythonic equivalent?
So far I have tried copying the jars from /usr/share/... to $SPARK_HOME/libs/jars and adding that path to the external class path in spark-defaults.conf, but that had no effect.
Use the spark-submit command to execute your Python script. Example:
spark-submit --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar script.py
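If you want to keep everything inside the Python script (so it can be launched with plain python script.py), a hedged alternative that is not part of this answer is to pass the same jars through the spark.jars property on the builder before the first session is created. The jar paths are the EMR locations from the question:
from pyspark import SparkContext
from pyspark.sql import SparkSession
# jar locations taken from the question's pyspark --jars invocation
ddb_jars = ",".join([
    "/usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar",
    "/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar",
])
SparkContext.setSystemProperty("hive.metastore.uris", "thrift://nn1:9083")
sparkSession = (SparkSession
                .builder
                .appName('example-pyspark-read-and-write-from-hive')
                .config("spark.jars", ddb_jars)  # analogous to --jars; set before the JVM starts
                .enableHiveSupport()
                .getOrCreate())
df_load = sparkSession.sql("SELECT * FROM example")
df_load.show()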

Apache Spark 2.0 MQTT - Err "No module named mqtt"

I get this error when I try to run my application using Spark 2.0. I tried downloading the package from https://github.com/spark-packages/dstream-mqtt, but the repositories don't exist. I also tried searching for the package at https://spark-packages.org/, but couldn't find any. My program is very simple:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from dstream_mqtt import MQTTUtils
#from pyspark.streaming.mqtt import MQTTUtils
sc = SparkContext()
ssc = StreamingContext(sc, 6)
mqttStream = MQTTUtils.createStream(ssc,"tcp://192.168.4.54:1883","/test")
mqttStream.pprint()
mqttStream.saveAsTextFiles("test/status", "txt")
ssc.start()
ssc.awaitTermination()
ssc.stop()
I have downloaded and tried including the jar files spark-streaming-mqtt-assembly_2.11-1.6.2.jar and spark-streaming-mqtt_2.11-1.6.2.jar, but that did not help.
The same code and packages work fine with Spark 1.6.
Any help will be appreciated.

Spark : Error Not found value SC

I have just started with Spark. I have CDH5 installed with Spark. However, when I try to use the SparkContext it gives the error below:
<console>:17: error: not found: value sc
val distdata = sc.parallelize(data)
I have researched this and found the existing question error: not found: value sc, and tried to start the Spark context with ./spark-shell. It gives the error No such file or directory.
You can start spark-shell with ./spark-shell if you're in its exact directory, or with path/to/spark-shell if you're elsewhere.
Also, if you're running a script with spark-submit, you need to initialize sc as a SparkContext first:
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val conf = new SparkConf().setAppName("Simple Application")
val sc = new SparkContext(conf)
There is another Stack Overflow post that answers this question by getting sc (the SparkContext) from the SparkSession. I do it this way:
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName("app_name").enableHiveSupport().getOrCreate()
val sc = spark.sparkContext
original answer here:
Retrieve SparkContext from SparkSession
Add the Spark bin directory to your PATH; then you can use spark-shell from anywhere.
Add import org.apache.spark.SparkContext if you are using it in a spark-submit job, and create a Spark context using:
val sc = new SparkContext(conf)
where conf is already defined.
Starting a new terminal fixes the problem in my case.
You need to run the Hadoop daemons first (run the command start-all.sh). Then you can run this command in the Spark (Scala) prompt:
conf.set("spark.driver.allowMultipleContexts","true")
