JavaPackage object is not callable error for pydeequ constraint suggestion - apache-spark

I'm getting a "JavaPackage object is not callable" error while trying to run the PyDeequ constraint suggestion method on databricks.
I have tried running this code on Apache Spark 3.1.2 cluster as well as Apache Spark 3.0.1 cluster but no luck.
suggestionResult = ConstraintSuggestionRunner(spark).onData(df).addConstraintRule(DEFAULT()).run()
print(suggestionResult)
Please refer to the second screenshot for the expanded error details.
[Screenshot: PyDeequ error]
[Screenshot: expanded PyDeequ error]

I was able to combine some solutions found here, as well as other solutions, to get past the above JavaPackage error in Azure Databricks. Here are the details, if helpful for anyone.
From this link, I downloaded the appropriate JAR file to match my Spark version; in my case, that was deequ_2_0_1_spark_3_2.jar. I then installed this file as a JAR-type library under Libraries in my cluster configuration.
The following then worked, run in separate cells in a notebook.
%pip install pydeequ
%sh export SPARK_VERSION=3.2.1
df = spark.read.load("abfss://container-name@account.dfs.core.windows.net/path/to/data")
from pyspark.sql import SparkSession
import pydeequ
spark = (SparkSession
    .builder
    .getOrCreate())
from pydeequ.analyzers import *
analysisResult = AnalysisRunner(spark) \
    .onData(df) \
    .addAnalyzer(Size()) \
    .addAnalyzer(Completeness("column_name")) \
    .run()
analysisResult_df = AnalyzerContext.successMetricsAsDataFrame(spark, analysisResult)
analysisResult_df.show()
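For completeness, since the original error came from the constraint suggestion API rather than the analyzers, here is a minimal sketch of the suggestion run under the same setup. It assumes the Deequ JAR is installed on the cluster as described above and that df is the DataFrame loaded earlier; the os.environ fallback for SPARK_VERSION is an assumption (in case the %sh export does not reach the Python process), not part of the original answer.

import os
# Assumption: fallback in case the %sh export does not propagate to the Python process
os.environ.setdefault("SPARK_VERSION", "3.2.1")

import json
import pydeequ
from pyspark.sql import SparkSession
from pydeequ.suggestions import *

spark = SparkSession.builder.getOrCreate()

# Run the default constraint suggestion rules against the DataFrame loaded above
suggestionResult = (ConstraintSuggestionRunner(spark)
    .onData(df)
    .addConstraintRule(DEFAULT())
    .run())

# The result is a plain dict of suggested constraints
print(json.dumps(suggestionResult, indent=2))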

Related

ModuleNotFoundError: No module named 'pyspark.sql' when using with EMR Serverless

I'm getting the following error when trying to run a job on EMR Serverless -
ModuleNotFoundError: No module named 'pyspark.sql'. Please refer to user guide on how to use python libraries with EMR Serverless.
It happens when I try to import pyspark.sql in a python file located within a zip package.
The file -
pyspark.zip
|--__init__.py
|--spark.py
The content -
#__init__.py
from .spark import *

#spark.py
from pyspark.sql import SparkSession

def run():
    print("Create Spark Session")
    spark_session = SparkSession\
        .builder\
        .appName("First pyspark project")\
        .getOrCreate()
The Spark properties I gave to the job -
--conf spark.submit.pyFiles=s3://my-bucket/pyspark.zip
--conf spark.executorEnv.PYSPARK_PYTHON=python
I'm afraid I missed something. Do I need to install pyspark separately, or something else?
All I did was compress the project into a zip file and upload it to S3.
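For context, a minimal sketch of what an entry-point script calling into the zipped module might look like under this configuration. The entry_point.py name is hypothetical, and it assumes the zip root containing spark.py ends up on the Python path via spark.submit.pyFiles.

# entry_point.py (hypothetical main script submitted to EMR Serverless)
# Assumes spark.py from the zip is importable because the archive root is on the Python path
from spark import run

if __name__ == "__main__":
    run()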

How do I connect Spark to JDBC driver in Zeppelin?

I am trying to pull data from a SQL Server database into a Hive table using Spark in a Zeppelin notebook.
I am trying to run the following code:
%pyspark
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.functions import *
spark = SparkSession.builder \
    .appName('sample') \
    .getOrCreate()

# set url, table, etc.
df = spark.read.format('jdbc') \
    .option('url', url) \
    .option('driver', 'com.microsoft.sqlserver.jdbc.SQLServerDriver') \
    .option('dbtable', table) \
    .option('user', user) \
    .option('password', password) \
    .load()
However, I keep getting the exception:
...
Py4JJavaError: An error occurred while calling o81.load.
: java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
...
I have been trying to figure this out all day, and I believe something is wrong with how I am setting up the driver. I have the driver at /tmp/sqljdbc42.jar on the instance. Can you please explain how I can let Spark know where this driver is? I have tried many different approaches, both through the shell and through the interpreter editor.
Thanks!
EDIT
I should also note that I loaded the JAR onto my instance through Zeppelin's shell (%sh) using
curl -o /tmp/sqljdbc42.jar http://central.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/6.4.0.jre8/mssql-jdbc-6.4.0.jre8.jar
pyspark --driver-class-path /tmp/sqljdbc42.jar --jars /tmp/sqljdbc42.jar
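For reference alongside the answers below, a minimal sketch of supplying the same JAR through Spark configuration when you build the session yourself. This assumes the driver JAR really is at /tmp/sqljdbc42.jar as above and that you are not relying on Zeppelin's pre-created session (which is configured in the interpreter settings instead).

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName('sample')
    # Ship the JDBC driver to the driver and executors (path as used above)
    .config('spark.jars', '/tmp/sqljdbc42.jar')
    .config('spark.driver.extraClassPath', '/tmp/sqljdbc42.jar')
    .getOrCreate())

With the driver on the classpath, the spark.read.format('jdbc') call above should be able to resolve com.microsoft.sqlserver.jdbc.SQLServerDriver.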
Here is how I fixed this:
scp the driver JAR onto the cluster driver node.
Go to the Zeppelin interpreter settings, scroll to the Spark section, and click edit.
Write the complete path to the JAR under artifacts, e.g. /home/Hadoop/mssql-jdbc.jar, and nothing else.
Click save.
Then you should be good!
You can add it through the Web UI in the Interpreter settings as follows:
Click Interpreter in the menu.
Click the 'edit' button in the Spark interpreter.
Add the path to the JAR in the artifact field.
Then just save and restart the interpreter.
Similar to Tomas, you can add the driver (or any library) using Maven in the interpreter:
Click Interpreter in the menu.
Click the 'edit' button in the Spark interpreter.
Add the groupId:artifactId:version in the artifact field.
For example, in your case, you can use com.microsoft.sqlserver:mssql-jdbc:jar:8.4.1.jre8 in the artifact field.
When you restart the interpreter, it will download and add the dependency for you.
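Outside of the interpreter settings, the same Maven-coordinate idea can be sketched with spark.jars.packages when you create the session yourself; the coordinate below is the one suggested above, written as groupId:artifactId:version, which is what the property expects.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName('sample')
    # Resolve the JDBC driver from Maven at session startup
    .config('spark.jars.packages', 'com.microsoft.sqlserver:mssql-jdbc:8.4.1.jre8')
    .getOrCreate())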

Setting spark.local.dir in Pyspark/Jupyter

I'm using Pyspark from a Jupyter notebook and attempting to write a large parquet dataset to S3.
I get a 'no space left on device' error. I searched around and learned that it's because /tmp is filling up.
I want to now edit spark.local.dir to point to a directory that has space.
How can I set this parameter?
Most solutions I found suggested setting it when using spark-submit. However, I am not using spark-submit and just running it as a script from Jupyter.
Edit: I'm using Sparkmagic to work with an EMR backend. I think spark.local.dir needs to be set in the config JSON, but I am not sure how to specify it there.
I tried adding it in session_configs but it didn't work.
The answer depends on where your SparkContext comes from.
If you are starting Jupyter with pyspark:
PYSPARK_DRIVER_PYTHON='jupyter'\
PYSPARK_DRIVER_PYTHON_OPTS="notebook" \
PYSPARK_PYTHON="python" \
pyspark
then your SparkContext is already initialized when you receive your Python kernel in Jupyter. You should therefore pass a parameter to pyspark (at the end of the command above): --conf spark.local.dir=...
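Concretely, that would mean appending the flag to the launch command shown above, for example (keeping ... as a placeholder for your directory):

PYSPARK_DRIVER_PYTHON='jupyter' \
PYSPARK_DRIVER_PYTHON_OPTS="notebook" \
PYSPARK_PYTHON="python" \
pyspark --conf spark.local.dir=...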
If you are constructing a SparkContext in Python
If you have code in your notebook like:
import pyspark
sc = pyspark.SparkContext()
then you can configure the Spark context before creating it:
import pyspark
conf = pyspark.SparkConf()
conf.set('spark.local.dir', '...')
sc = pyspark.SparkContext(conf=conf)
Configuring Spark from the command line:
It's also possible to configure Spark by editing its configuration file from the shell. The file you want to edit is ${SPARK_HOME}/conf/spark-defaults.conf. You can append to it as follows (creating it if it doesn't exist):
echo 'spark.local.dir /foo/bar' >> ${SPARK_HOME}/conf/spark-defaults.conf
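The question also mentions Sparkmagic against an EMR backend. For that setup, a hedged sketch of what the per-session configuration could look like using the %%configure cell magic; this assumes a Livy-backed Sparkmagic session, and /mnt/tmp is only a placeholder path.

%%configure -f
{
    "conf": {
        "spark.local.dir": "/mnt/tmp"
    }
}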

Submit an application to a standalone spark cluster running in GCP from Python notebook

I am trying to submit a Spark application to a standalone Spark (2.1.1) cluster of 3 VMs running in GCP from my Python 3 notebook (running on my local laptop), but for some reason the Spark session throws the error "StandaloneAppClient$ClientEndpoint: Failed to connect to master sparkmaster:7077".
Environment details: IPython and the Spark master run on one GCP VM called "sparkmaster". Three additional GCP VMs run Spark workers and Cassandra clusters. I connect from my local laptop (MBP) using Chrome to the IPython notebook on the "sparkmaster" GCP VM.
Please note that this works from the terminal:
bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1 --master spark://sparkmaster:7077 ex.py 1000
Running it from Python Notebook:
import os
os.environ["PYSPARK_SUBMIT_ARGS"] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1 pyspark-shell'
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark = SparkSession.builder.master("spark://sparkmaster:7077").appName('somatic').getOrCreate()  # This step works if I use .master('local')
df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092") \
    .option("subscribe", "gene") \
    .load()
So far I have tried these:
I tried changing spark-defaults.conf and spark-env.sh on the Spark master node to add SPARK_MASTER_IP.
I tried to find the STANDALONE_SPARK_MASTER_HOST=`hostname -f` setting so that I could remove the "-f"; for some reason my Spark master UI shows FQDN:7077, not hostname:7077.
I passed the FQDN as a parameter to .master() and to os.environ["PYSPARK_SUBMIT_ARGS"].
Please let me know if you need more details.
After doing some more research I was able to resolve the issue. It was caused by a single environment variable, SPARK_HOME. In my case it was pointing to Conda's /bin (pyspark was running from that location), whereas my Spark setup was in a different path. The simple fix was to add
export SPARK_HOME="/home/<<your location path>>/spark/"
to the .bashrc file (I want this attached to my profile, not to the Spark session).
How I did it:
Step 1: ssh to the master node; in my case it was the same VM as the IPython kernel/server in GCP.
Step 2:
cd ~
sudo nano .bashrc
Scroll down to the last line and paste the line below:
export SPARK_HOME="/home/your/path/to/spark-2.1.1-bin-hadoop2.7/"
Press Ctrl+X, then Y, then Enter to save the changes.
Note: I have also added a few more details to the environment section for clarity.
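As a quick sanity check (not part of the original answer, just a hedged way to confirm which installation is being picked up), the relevant paths can be printed from the notebook before creating the session:

import os
import pyspark

# Confirm which Spark installation and which pyspark module are actually in use
print("SPARK_HOME =", os.environ.get("SPARK_HOME"))
print("pyspark loaded from:", pyspark.__file__)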

How to connect to Cloudant/CouchDB using SparkSQL in Data Science Experience?

Formerly, CouchDB was supported via the cloudant connector:
https://github.com/cloudant-labs/spark-cloudant
But this project states that it is no longer active and that it moved to Apache Bahir:
http://bahir.apache.org/docs/spark/2.1.1/spark-sql-cloudant/
So I've installed the JAR in a Scala notebook using the following command:
%AddJar http://central.maven.org/maven2/org/apache/bahir/spark-sql-cloudant_2.11/2.1.1/spark-sql-cloudant_2.11-2.1.1.jar
Then, from a python notebook, after restarting the kernel, I use the following code to test:
spark = SparkSession\
    .builder\
    .appName("Cloudant Spark SQL Example in Python using dataframes")\
    .config("cloudant.host", "0495289b-1beb-4e6d-888e-315f36925447-bluemix.cloudant.com")\
    .config("cloudant.username", "0495289b-1beb-4e6d-888e-315f36925447-bluemix")\
    .config("cloudant.password", "xxx")\
    .config("jsonstore.rdd.partitions", 8)\
    .getOrCreate()

# 1. Load a dataframe from the Cloudant db
df = spark.read.load("openspace", "org.apache.bahir.cloudant")
df.cache()
df.printSchema()
df.show()
But I get:
java.lang.ClassNotFoundException: org.apache.bahir.cloudant.DefaultSource
(gist of full log)
There is one workaround; it should run in all sorts of Jupyter notebook environments and is not exclusive to IBM Data Science Experience:
!pip install --upgrade pixiedust
import pixiedust
pixiedust.installPackage("cloudant-labs:spark-cloudant:2.0.0-s_2.11")
This is of course a workaround; I will post the official answer once available.
EDIT:
Don't forget to restart the Jupyter kernel afterwards.
EDIT 24.12.18:
Created a YouTube video on this without the workaround, see comments. Will update this post as well at a later stage.
Another workaround below. It has been tested and works in DSX Python notebooks:
import pixiedust
# Use play-json version 2.5.9. Latest version is not supported at this time.
pixiedust.installPackage("com.typesafe.play:play-json_2.11:2.5.9")
# Get the latest sql-cloudant library
pixiedust.installPackage("org.apache.bahir:spark-sql-cloudant_2.11:0")
spark = SparkSession\
    .builder\
    .appName("Cloudant Spark SQL Example in Python using dataframes")\
    .config("cloudant.host", host)\
    .config("cloudant.username", username)\
    .config("cloudant.password", password)\
    .getOrCreate()

df = spark.read.load(format="org.apache.bahir.cloudant", database="MY-DB")
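Since the goal is to query Cloudant through Spark SQL, a short follow-up sketch of how the loaded DataFrame could then be queried; the view name is a placeholder.

# Register the Cloudant-backed DataFrame as a temporary view and query it with SQL
df.createOrReplaceTempView("my_db_view")
spark.sql("SELECT COUNT(*) AS n FROM my_db_view").show()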
