py4JJavaError: An error occurred while calling o253.load. : java.lang.ClassNotFoundException: Failed to find data source: bigquery - apache-spark

Trying to read data from BigQuery into a Jupyter notebook with the PySpark libraries. All of Apache Spark and Java have been downloaded to my C: drive. I have read and watched tutorial videos, but none of them seem to work. Looking for guidance.
Code:
import pyspark
import findspark
from pyspark import SparkContext,SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col, year, month, aggregate, date_add, timestamp_seconds, rank, split
from pyspark.sql.types import StructField, StructType, StringType, BooleanType, DoubleType, IntegerType, FloatType
#import com.google.cloud.spark.bigquery
#this creates spark UI - check current spark session
spark = SparkSession.builder.master('local[*]').appName('conversions').enableHiveSupport().getOrCreate()
df = spark.read.format('bigquery').load('table')
df.show()
error:
Py4JJavaError: An error occurred while calling o253.load.
: java.lang.ClassNotFoundException:
Failed to find data source: bigquery. Please find packages at
http://spark.apache.org/third-party-projects.html

Please change the SparkSession creation to
spark = SparkSession.builder \
.master('local[*]') \
.appName('conversions') \
.enableHiveSupport() \
.config('spark.jars.packages', 'com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.23.2') \
.getOrCreate()
Also, please make sure you are using a Python notebook rather than a PySpark notebook; otherwise Jupyter creates the SparkSession for you and no additional packages can be added.
See more documentation in the connector's repo.
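For reference, once the package resolves, a read should look roughly like this; the public Shakespeare table below is only an illustrative placeholder for your own table:
# Sketch only: assumes the spark-bigquery connector configured above was resolved.
# 'bigquery-public-data.samples.shakespeare' is a public table used purely as an example.
df = spark.read.format('bigquery') \
    .option('table', 'bigquery-public-data.samples.shakespeare') \
    .load()
df.show(5)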

Related

Read CSV file on Spark

I started working with Spark and ran into a problem.
I tried reading a CSV file using the code below:
df = spark.read.csv("/home/oybek/Serverspace/Serverspace/Athletes.csv")
df.show(5)
Error:
Py4JJavaError: An error occurred while calling o38.csv.
: java.lang.OutOfMemoryError: Java heap space
I am working on Ubuntu Linux in VirtualBox: ~/Serverspace.
You can try increasing the driver memory by creating the SparkSession like below:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.master('local[*]') \
.config("spark.driver.memory", "4g") \
.appName('read-csv') \
.getOrCreate()
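With the larger driver heap, the same read should go through; header and inferSchema below are just common options added for illustration, not something the error requires:
# Re-run the read using the 4g-driver session created above.
df = spark.read.csv("/home/oybek/Serverspace/Serverspace/Athletes.csv",
                    header=True, inferSchema=True)
df.show(5)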

Hide SparkSession builder output in jupyter lab

I start pyspark SparkSessions in Jupyter Lab like this:
from pyspark.sql import SparkSession
import findspark
import os
findspark.init(os.environ['SPARK_HOME'])
spark = (SparkSession.builder
.appName('myapp')
.master('yarn')
.config("spark.port.maxRetries", "1000")
.config('spark.executor.cores', "2")
.config("spark.executor.memory", "10g")
.config("spark.driver.memory", "4g")
#...
.getOrCreate()
)
And then a lot appears in the cell output...
WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/CDH-6.3.3-1.cdh6.3.3.p3969.3554875/lib/spark) overrides detected (/opt/cloudera/parcels/CDH/lib/spark).
WARNING: Running spark-class from user-defined location.
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadooplog/sparktmp
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/hadooplog/sparktmp
...
I would like to hide this output to clean up notebooks and make them easier to read. I've tried %%capture and spark.sparkContext.setLogLevel("ERROR") (although the latter only affects Spark session logging, and even then output still appears here and there). Neither works.
Running
pyspark version 2.4.0
jupyterlab version 3.2.1
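For context, the two attempts described above look roughly like this; the spark-class and _JAVA_OPTIONS lines are printed before the JVM log level takes effect, which is presumably why setLogLevel alone does not silence them:
# Attempt 1: %%capture must be the first line of the cell that builds the session.
# %%capture
# spark = SparkSession.builder.appName('myapp').getOrCreate()

# Attempt 2: lower the JVM-side log level once the session exists.
spark.sparkContext.setLogLevel("ERROR")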

Open parquet from GCS using local Pyspark

I have a folder on Google Cloud Storage with several parquet files. I installed PySpark on my VM and now I want to read the parquet files. Here's my code:
from pyspark.sql import SparkSession
spark = SparkSession\
.builder\
.config("spark.driver.maxResultSize", "40g") \
.config('spark.sql.shuffle.partitions', '2001') \
.config("spark.jars", "~/spark/spark-2.4.4-bin-hadoop2.7/jars/gcs-connector-hadoop2-latest.jar")\
.getOrCreate()
sc = spark.sparkContext
# using SQLContext to read parquet file
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
# to read parquet file
filename = "gs://path/to/parquet"
df = sqlContext.read.parquet(filename)
print(df.head())
When I run it, I get the following error:
WARN FileStreamSink: Error while looking for metadata directory.
To install pyspark i followed this tutorial: https://towardsdatascience.com/how-to-get-started-with-pyspark-1adc142456ec
Have you tried reading from GCS like this and then passing on the data you read? I do not think you can read directly with PySpark.
I've been reading around about the error, and in some cases it is raised when the file is not reachable or the path is incorrect. I think that might be it.
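If the path itself is correct, it is also worth checking that the gcs-connector jar is actually wired into the Hadoop configuration. A minimal sketch, assuming service-account authentication (the keyfile path is a placeholder):
from pyspark.sql import SparkSession

# Sketch: Hadoop settings the gcs-connector jar typically needs; the keyfile path is a placeholder.
spark = SparkSession.builder \
    .config("spark.jars", "~/spark/spark-2.4.4-bin-hadoop2.7/jars/gcs-connector-hadoop2-latest.jar") \
    .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") \
    .config("spark.hadoop.google.cloud.auth.service.account.enable", "true") \
    .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "/path/to/keyfile.json") \
    .getOrCreate()

df = spark.read.parquet("gs://path/to/parquet")
print(df.head())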

Pyspark Failed to find data source: kafka

I am working on Kafka streaming and trying to integrate it with Apache Spark. However, while running it I am hitting issues and getting the error below.
This is the command I am using.
df_TR = Spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "taxirides").load()
ERROR:
Py4JJavaError: An error occurred while calling o77.load.: java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at http://spark.apache.org/third-party-projects.html
How can I resolve this?
NOTE: I am running this in Jupyter Notebook
import findspark
findspark.init('/home/karan/spark-2.1.0-bin-hadoop2.7')
import pyspark
from pyspark.sql import SparkSession
Spark = SparkSession.builder.appName('KafkaStreaming').getOrCreate()
from pyspark.sql.types import *
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
Everything runs fine up to here (the code above).
df_TR = Spark.readStream.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("subscribe", "taxirides").load()
This is where things go wrong (the line above).
The blog which I am following: https://www.adaltas.com/en/2019/04/18/spark-streaming-data-pipelines-with-structured-streaming/
Edit
Using spark.jars.packages works better than PYSPARK_SUBMIT_ARGS
Ref - PySpark - NoClassDefFoundError: kafka/common/TopicAndPartition
It's not clear how you ran the code. Keep reading the blog, and you see
spark-submit \
...
--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0 \
sstreaming-spark-out.py
It seems you missed adding the --packages flag.
In Jupyter, you could add this:
import os
# setup arguments (note the trailing pyspark-shell, which PYSPARK_SUBMIT_ARGS expects)
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0 pyspark-shell'
# initialize spark
import pyspark, findspark
findspark.init()
Note: _2.11:2.4.0 needs to align with your Scala and Spark versions... Based on the question, yours should be Spark 2.1.0.
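A minimal sketch of the spark.jars.packages route noted in the edit, assuming the Spark 2.1.0 / Scala 2.11 build from the question (adjust the artifact version to match your own install):
# Sketch: pull the Kafka source via spark.jars.packages instead of PYSPARK_SUBMIT_ARGS.
# The artifact version must match your Spark/Scala build (here assumed to be 2.1.0 / 2.11).
from pyspark.sql import SparkSession

Spark = SparkSession.builder \
    .appName('KafkaStreaming') \
    .config('spark.jars.packages', 'org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0') \
    .getOrCreate()

df_TR = Spark.readStream.format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "taxirides") \
    .load()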

How to run python spark script with specific jars

I have to run a Python script on an EMR instance using PySpark to query DynamoDB. I am able to do that from the pyspark shell when it is launched with the jars included via the following command.
`pyspark --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar`
I ran the following Python 3 script to query data using the pyspark Python module.
import time
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, HiveContext
start_time = time.time()
SparkContext.setSystemProperty("hive.metastore.uris", "thrift://nn1:9083")
sparkSession = (SparkSession
.builder
.appName('example-pyspark-read-and-write-from-hive')
.enableHiveSupport()
.getOrCreate())
df_load = sparkSession.sql("SELECT * FROM example")
df_load.show()
print(time.time() - start_time)
This caused the following runtime exception due to missing jars.
java.lang.ClassNotFoundException Class org.apache.hadoop.hive.dynamodb.DynamoDBSerDe not found
How do I convert the pyspark --jars .. invocation to a Pythonic equivalent?
So far I have tried copying the jars from /usr/share/... to $SPARK_HOME/libs/jars and adding that path to the spark-defaults.conf external class path, but that had no effect.
Use the spark-submit command to execute your Python script. Example:
spark-submit --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar script.py
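If you would rather keep everything inside the script instead of the command line, a sketch of passing the same jars through spark.jars in the builder (they are only picked up when the session is first created):
# Sketch: point spark.jars at the same DynamoDB jars that --jars would supply.
from pyspark.sql import SparkSession

sparkSession = (SparkSession
    .builder
    .appName('example-pyspark-read-and-write-from-hive')
    .config('spark.jars', '/usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar,'
                          '/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar')
    .enableHiveSupport()
    .getOrCreate())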
