I am working on Kafka streaming and trying to integrate it with Apache Spark. However, I am running into issues while running it: I get the error below.
This is the command I am using.
df_TR = Spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "taxirides").load()
ERROR:
Py4JJavaError: An error occurred while calling o77.load.: java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at http://spark.apache.org/third-party-projects.html
How can I resolve this?
NOTE: I am running this in a Jupyter Notebook.
import findspark
findspark.init('/home/karan/spark-2.1.0-bin-hadoop2.7')
import pyspark
from pyspark.sql import SparkSession
Spark = SparkSession.builder.appName('KafkaStreaming').getOrCreate()
from pyspark.sql.types import *
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
Everything runs fine up to this point (the code above).
df_TR = Spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "taxirides").load()
This is where things go wrong (the code above).
The blog which I am following: https://www.adaltas.com/en/2019/04/18/spark-streaming-data-pipelines-with-structured-streaming/
Edit
Using spark.jars.packages works better than PYSPARK_SUBMIT_ARGS
Ref - PySpark - NoClassDefFoundError: kafka/common/TopicAndPartition
It's not clear how you ran the code. Keep reading the blog, and you will see:
spark-submit \
...
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0 \
sstreaming-spark-out.py
It seems you missed adding the --packages flag.
In Jupyter, you could add this
import os
# setup arguments
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0 pyspark-shell'
# initialize spark
import pyspark, findspark
findspark.init()
Note: _2.11:2.4.0 needs to align with your Scala and Spark versions... Based on the question, yours should be Spark 2.1.0.
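As a sketch of that version alignment (the version strings below are assumptions taken from the question; check `spark.version` and the Scala build of your distribution before copying them):

```python
import os

# Assumed versions: the Spark 2.1.0 binary distributions were built against
# Scala 2.11, hence the _2.11 artifact suffix.
scala_version = "2.11"
spark_version = "2.1.0"
package = "org.apache.spark:spark-sql-kafka-0-10_{}:{}".format(scala_version, spark_version)

# Must be set before `import pyspark`; the trailing `pyspark-shell` token is
# required when launching through PYSPARK_SUBMIT_ARGS.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages {} pyspark-shell".format(package)
```

The spark.jars.packages route mentioned in the edit passes the same coordinate via SparkSession.builder.config('spark.jars.packages', package) instead of an environment variable.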
Related
I have two PCs: one is an Ubuntu system that has Cassandra, and the other is a Windows PC.
I have installed the same versions of Java, Spark, Python and Scala on both PCs. My goal is to read data with a Jupyter Notebook, using Spark, from the Cassandra instance on the other PC.
On the PC that has Cassandra, I was able to read data by connecting to Cassandra using Spark. But when I try to connect to that Cassandra instance from the remote client using Spark, I cannot connect and get an error.
Representation of the system
Commands run on the Ubuntu PC which has Cassandra:
~/spark/bin/pyspark --master spark://10.0.0.10:7077 --packages com.datastax.spark:spark-cassandra-connector_2.12:3.1.0 --conf spark.driver.extraJavaOptions=-Xss512m --conf spark.executor.extraJavaOptions=-Xss512m
from pyspark.sql.functions import col
hosts = {"spark.cassandra.connection.host": '10.0.0.10,10.0.0.11,10.0.0.12', "table": "table_one", "keyspace": "log_keyspace"}
data_frame = sqlContext.read.format("org.apache.spark.sql.cassandra").options(**hosts).load()
a = data_frame.filter(col("col_1")<100000).select("col_1","col_2","col_3","col_4","col_5").toPandas()
As a result of running the above code, the data received from Cassandra can be displayed.
Commands that try to get data by connecting to Cassandra from the other PC:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = ' --master spark://10.0.0.10:7077 --packages com.datastax.spark:spark-cassandra-connector_2.12:3.1.0 --conf spark.driver.extraJavaOptions=-Xss512m --conf spark.executer.extraJavaOptions=-Xss512m spark.cassandra.connection.host=10.0.0.10 pyspark '
import findspark
findspark.init()
findspark.find()
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql import SQLContext
conf = SparkConf().setAppName('example')
sc = SparkContext(conf=conf)
spark = SparkSession(sc)
hosts ={"spark.cassandra.connection.host":'10.0.0.10',"table":"table_one","keyspace":"log_keyspace"}
sqlContext = SQLContext(sc)
data_frame = sqlContext.read.format("org.apache.spark.sql.cassandra").options(**hosts).load()
As a result of running the above code, the error "java.lang.ClassNotFoundException: Failed to find data source: org.apache.spark.sql.cassandra. Please find packages at http://spark.apache.org/third-party-projects.html" occurs.
What can I do to fix this error?
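One hedged guess, based on the PYSPARK_SUBMIT_ARGS value shown in the failing code: the spark.cassandra.connection.host setting is missing its --conf prefix, and the string ends with a bare "pyspark" rather than the "pyspark-shell" token, so the --packages list is likely never applied. A corrected sketch (master URL, host, and connector version copied from the question; the connector version must still match your Spark/Scala build):

```python
import os

# Corrected submit args. Assumption: this cell runs *before* `import pyspark`,
# otherwise the JVM has already started and the packages are ignored.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--master spark://10.0.0.10:7077 "
    "--packages com.datastax.spark:spark-cassandra-connector_2.12:3.1.0 "
    "--conf spark.driver.extraJavaOptions=-Xss512m "
    "--conf spark.executor.extraJavaOptions=-Xss512m "
    "--conf spark.cassandra.connection.host=10.0.0.10 "
    "pyspark-shell"  # literal launcher token, must come last
)
```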
I am trying to read data from BigQuery into a Jupyter notebook with PySpark libraries. All of Apache Spark and Java have been downloaded to my C: drive. I have read and watched tutorial videos, but none of them seem to work. Looking for guidance.
Code:
import pyspark
import findspark
from pyspark import SparkContext,SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import (window, col, year, month, aggregate,
                                   date_add, timestamp_seconds, rank, split)
from pyspark.sql.types import (StructField, StructType, StringType, BooleanType,
                               DoubleType, IntegerType, FloatType)
#import com.google.cloud.spark.bigquery
#this creates spark UI - check current spark session
spark = SparkSession.builder.master('local[*]').appName('conversions').enableHiveSupport().getOrCreate()
df = spark.read.format('bigquery').load('table')
df.show()
error:
Py4JJavaError: An error occurred while calling o253.load.
: java.lang.ClassNotFoundException:
Failed to find data source: bigquery. Please find packages at
http://spark.apache.org/third-party-projects.html
Please change the SparkSession creation to:
spark = SparkSession.builder \
.master('local[*]') \
.appName('conversions') \
.enableHiveSupport() \
.config('spark.jars.packages', 'com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.23.2') \
.getOrCreate()
Also, please make sure you are using a python notebook rather than a pyspark notebook - otherwise Jupyter will create the SparkSession for you and no additional packages can be added.
See more documentation in the connector's repo.
I start pyspark SparkSessions in Jupyter Lab like this:
import os
import findspark
findspark.init(os.environ['SPARK_HOME'])
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName('myapp')
.master('yarn')
.config("spark.port.maxRetries", "1000")
.config('spark.executor.cores', "2")
.config("spark.executor.memory", "10g")
.config("spark.driver.memory", "4g")
#...
.getOrCreate()
)
And then a lot appears in the cell output...
WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/CDH-6.3.3-1.cdh6.3.3.p3969.3554875/lib/spark) overrides detected (/opt/cloudera/parcels/CDH/lib/spark).
WARNING: Running spark-class from user-defined location.
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadooplog/sparktmp
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadooplog/sparktmp
...
I would like to hide this output to clean up notebooks and make them easier to read. I've tried %%capture and spark.sparkContext.setLogLevel("ERROR") (although the latter only pertains to Spark's own logging, and even then output still appears here and there). Neither works.
Running
pyspark version 2.4.0
jupyterlab version 3.2.1
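For what it's worth, a sketch of one possible workaround (an assumption on my part, not a confirmed fix): the launcher warnings come from child processes (spark-class, the JVM) writing to the notebook kernel's stderr file descriptor, which is why %%capture cannot intercept them; redirecting the descriptor itself at the OS level around session creation can.

```python
import os

# Temporarily point fd 2 (stderr) at /dev/null so output written by child
# processes launched in between is discarded as well.
devnull = os.open(os.devnull, os.O_WRONLY)
saved_stderr = os.dup(2)
os.dup2(devnull, 2)
try:
    # ... SparkSession.builder...getOrCreate() would go here ...
    pass
finally:
    os.dup2(saved_stderr, 2)  # restore stderr for later cells
    os.close(devnull)
    os.close(saved_stderr)
```

Caveat: the JVM inherits whatever fd 2 points at when it is launched, so everything it writes later stays suppressed too, which may or may not be what you want.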
I have to run a Python script on an EMR instance using PySpark to query DynamoDB. I am able to query DynamoDB from pyspark when it is launched with the jars included via the following command.
pyspark --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar
I ran the following Python 3 script to query the data using the pyspark Python module.
import time
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, HiveContext
start_time = time.time()
SparkContext.setSystemProperty("hive.metastore.uris", "thrift://nn1:9083")
sparkSession = (SparkSession
.builder
.appName('example-pyspark-read-and-write-from-hive')
.enableHiveSupport()
.getOrCreate())
df_load = sparkSession.sql("SELECT * FROM example")
df_load.show()
print(time.time() - start_time)
This caused the following runtime exception for missing jars:
java.lang.ClassNotFoundException Class org.apache.hadoop.hive.dynamodb.DynamoDBSerDe not found
How do I convert the pyspark --jars ... invocation to a pythonic equivalent?
So far I have tried copying the jars from the location /usr/share/... to $SPARK_HOME/libs/jars and adding that path to the external class path in spark-defaults.conf, which had no effect.
Use the spark-submit command to execute your Python script. Example:
spark-submit --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar script.py
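To answer the "pythonic equivalent" part: the same jars can be handed to the launcher from inside the script, provided this runs before `import pyspark` (the jar paths below are copied from the question and assumed correct on the EMR node):

```python
import os

# Equivalent of `pyspark --jars a.jar,b.jar`, expressed in Python. Must run
# before pyspark is imported, because the jars are read at JVM launch.
ddb_jars = ",".join([
    "/usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar",
    "/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar",
])
os.environ["PYSPARK_SUBMIT_ARGS"] = "--jars {} pyspark-shell".format(ddb_jars)
```

Alternatively, the same comma-separated list can go into the spark.jars property via SparkSession.builder.config("spark.jars", ddb_jars) before getOrCreate().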
I get this error when I try to run my application using Spark 2.0. I tried downloading the package from https://github.com/spark-packages/dstream-mqtt, but the repositories don't exist. I also searched for the package at https://spark-packages.org/, but couldn't find any. My program is very simple:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from dstream_mqtt import MQTTUtils
#from pyspark.streaming.mqtt import MQTTUtils
sc = SparkContext()
ssc = StreamingContext(sc, 6)
mqttStream = MQTTUtils.createStream(ssc,"tcp://192.168.4.54:1883","/test")
mqttStream.pprint()
mqttStream.saveAsTextFiles("test/status", "txt")
ssc.start()
ssc.awaitTermination()
ssc.stop()
I have downloaded and tried including the jar files spark-streaming-mqtt-assembly_2.11-1.6.2.jar and spark-streaming-mqtt_2.11-1.6.2.jar, but it does not help.
The same code and packages work fine with Spark 1.6.
Any help will be appreciated.
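One hedged pointer, to be verified rather than taken as a confirmed fix: for Spark 2.x the MQTT streaming support was moved out of Spark into the Apache Bahir project, which would explain why the 1.6 assemblies no longer work. Attaching a Bahir coordinate might look like this (the exact version below is a guess and must be checked against Maven Central and matched to your Spark/Scala build):

```python
import os

# Hypothetical Bahir coordinate: _2.11 and 2.0.0 are assumptions that must
# match your Scala build and Spark version. Set before `import pyspark`.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages org.apache.bahir:spark-streaming-mqtt_2.11:2.0.0 pyspark-shell"
)
```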