Error while exporting spark sql dataframe to csv - apache-spark

I have referred to the following links to understand how to export a Spark SQL DataFrame to CSV in Python:
https://github.com/databricks/spark-csv
How to export data from Spark SQL to CSV
My code:
df = sqlContext.createDataFrame(routeRDD, ['Consigner', 'AverageScore', 'Trips'])
df.select('Consigner', 'AverageScore', 'Trips').write.format('com.databricks.spark.csv').options(header='true').save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv')
I submit the job with spark-submit, passing the following jars on the master URL:
spark-csv_2.11-1.5.0.jar, commons-csv-1.4.jar
I am getting the following error:
df.select('Consigner', 'AverageScore', 'Trips').write.format('com.databricks.spark.csv').options(header='true').save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv')
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 332, in save
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 36, in deco
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o156.save.
: java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at com.databricks.spark.csv.util.CompressionCodecs$.<init>(CompressionCodecs.scala:29)
at com.databricks.spark.csv.util.CompressionCodecs$.<clinit>(CompressionCodecs.scala)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:198)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:170)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)

Spark version 1.5.0-cdh5.5.1 is built with Scala 2.10, the default Scala version for Spark < 2.0. Your spark-csv is built with Scala 2.11 (spark-csv_2.11-1.5.0.jar).
Please update spark-csv to a version built for Scala 2.10, or update Spark to a Scala 2.11 build. You can tell the Scala version from the suffix of the artifactId, i.e. spark-csv_2.10-1.5.0 is built for Scala 2.10.
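To illustrate, here is a minimal sketch of the same export, assuming the job is submitted with the Scala 2.10 build of the connector (for example spark-submit --jars spark-csv_2.10-1.5.0.jar,commons-csv-1.4.jar your_script.py); routeRDD is taken from the question and the script name is a placeholder:
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName='top_consigner_export')
sqlContext = SQLContext(sc)

# routeRDD is assumed to be defined as in the question.
df = sqlContext.createDataFrame(routeRDD, ['Consigner', 'AverageScore', 'Trips'])

# Same write as in the question; the fix is entirely in matching the connector's
# Scala version (_2.10) to the Scala version Spark 1.5.0 was built with.
(df.select('Consigner', 'AverageScore', 'Trips')
   .write
   .format('com.databricks.spark.csv')
   .option('header', 'true')
   .save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv'))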

I am running Spark on Windows and faced a similar issue of not being able to write to a file (CSV or Parquet). After reading more on the Spark website, I found the error below, and it was caused by the winutils version I was using. I changed it to the 64-bit version and it worked. Hope this helps someone.
Spark Log

Related

Error while reading data from COSMOS DB using azure-cosmos-spark_3-2_2-12-4.10.0.jar on DataBricks Runtime 10.4 LTS

I have PySpark code deployed on Azure Databricks which basically reads and writes data to and from Cosmos DB.
In previous months, I used:
Databricks runtime version: 6.4 (Extended Support) with Scala 2.11 and Spark 2.4.5
Azure Cosmos DB Spark connector: azure-cosmosdb-spark_2.4.0_2.11-3.7.0-uber.jar
The program ran fine without any errors.
But when I upgraded my runtime to:
Databricks runtime version: 10.4 LTS with Scala 2.12 and Spark 3.2.1
Azure Cosmos DB Spark connector: azure-cosmos-spark_3-2_2-12-4.10.0.jar
it throws an error saying:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<command-2873073042801780> in <module>
14 #Reading data from cosmosdb
15 #try:
---> 16 input_data_cosmosdb = spark.read.format("com.microsoft.azure.cosmosdb.spark").options(**ReadConfig_input_cosmos).load()
17 print("Cosmos DB columns ",input_data_cosmosdb.columns)
18 #except:
/databricks/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
162 return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
163 else:
--> 164 return self._df(self._jreader.load())
165
166 def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
Py4JJavaError: An error occurred while calling o663.load.
: java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps;
at com.microsoft.azure.cosmosdb.spark.config.Config$.getOptionsFromConf(Config.scala:281)
at com.microsoft.azure.cosmosdb.spark.config.Config$.apply(Config.scala:229)
at com.microsoft.azure.cosmosdb.spark.DefaultSource.createRelation(DefaultSource.scala:55)
at com.microsoft.azure.cosmosdb.spark.DefaultSource.createRelation(DefaultSource.scala:40)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:385)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:356)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:323)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:323)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
I have tried all the solutions available on the internet. I know this error usually appears when 2.11 libraries are used in a 2.12 project, but I am using a 2.12 project and a 2.12 Spark connector. Yet I still get the error.
Please let me know if anyone is using the same environment. I use Databricks on Azure and am looking for a solution.
Any answers related to Databricks are really appreciated.
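Not a confirmed fix, but one clue from the traceback: the classes being invoked (com.microsoft.azure.cosmosdb.spark.*) belong to the old 2.x connector, which matches the format string "com.microsoft.azure.cosmosdb.spark" in the read. The newer azure-cosmos-spark_3-* connector is addressed via a different source name and option keys; the sketch below assumes the format name "cosmos.oltp" and the spark.cosmos.* keys from that connector's documentation, neither of which is confirmed in this thread:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed option keys for the azure-cosmos-spark (OLTP) connector; all values are placeholders.
read_config_cosmos = {
    "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<key>",
    "spark.cosmos.database": "<database>",
    "spark.cosmos.container": "<container>",
}

# The new connector is referenced as "cosmos.oltp" rather than
# "com.microsoft.azure.cosmosdb.spark", which resolves to the old connector's classes.
input_data_cosmosdb = (spark.read
                       .format("cosmos.oltp")
                       .options(**read_config_cosmos)
                       .load())
print("Cosmos DB columns ", input_data_cosmosdb.columns)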

How to resolve spark error when reading from s3

I am getting an error (java.io.IOException: No FileSystem for scheme: S3a) when running a Spark application. I have looked through various other questions regarding this type of error, but I'm not able to determine the solution. Spark is version 3.1.2.
Updated details below to reflect current state
pyspark script:
import os
#os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.4 pyspark-shell'
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .appName("s3reader") \
    .getOrCreate()
sc = spark.sparkContext
#sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
#sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "xxxxxxx")
#sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "xxxxxxxxxxxx")
#sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint","xxx.x.xxx.x.com", "us-1-east")
#sc._jsc.hadoopConfiguration().set("fs.s3a.path.style.access", "true")
df = spark.read.json("S3a://silver/testfolder/4a2426b2-856c-4e9b-b698-b3dcdca74f48")
print(df)
here are my jar versions:
cloud#spark-dev-master:/usr/local/spark/jars$ ls -ltr *aws*
-rw-rw-r-- 1 cloud cloud 126287 Aug 18 2016 hadoop-aws-2.7.4.jar
-rw-rw-r-- 1 cloud cloud 4479 Sep 17 02:36 aws-java-sdk-1.7.4.jar
stack trace:
Traceback (most recent call last):
File "/home/cloud/sparks3test.py", line 18, in <module>
df = spark.read.json("S3a://silver/testfolder/4a2426b2-856c-4e9b-b698-b3dcdca74f48")
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 372, in json
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o33.json.
: java.io.IOException: No FileSystem for scheme: S3a
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:377)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307)
at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:519)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:829)
You need to use hadoop-aws version 3.2.0.
You can refer to my previous answer here:
I am getting an error (java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities)
This is what you see when you mix hadoop-aws and hadoop-common JAR versions. They must match point for point (as the Spark JARs also require).
Do not attempt to work around this other than by syncing up the JARs; you will only be moving stack traces around.
See Hadoop troubleshooting s3a
As there still appeared to be jar dependency issues, I did a fresh install of Spark 3.1.2 with Hadoop 3.2.0 and aligned the hadoop-aws and aws-java-sdk jars with the hadoop-common jar version on the master and worker nodes. This corrected the file system issue. Upgrading to 3.2.0 also corrected the endpoint issue we were running into, as path.style.access=true is not supported in any Hadoop version older than 2.8.0. That issue is documented at https://issues.apache.org/jira/browse/HADOOP-12963 for reference.
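For anyone reproducing the aligned setup, a minimal hedged sketch (the fs.s3a.* keys are standard Hadoop S3A settings, the credentials and endpoint are placeholders, and spark.jars.packages only takes effect here when the session is started from plain Python; with spark-submit, pass --packages on the command line instead):
from pyspark.sql import SparkSession

# hadoop-aws must match the hadoop-common version bundled with Spark
# (assumed here: a Spark 3.1.2 build on Hadoop 3.2.0).
spark = (SparkSession.builder
         .appName("s3reader")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
         .getOrCreate())

hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", "<access-key>")
hconf.set("fs.s3a.secret.key", "<secret-key>")
hconf.set("fs.s3a.endpoint", "<endpoint>")       # only needed for a non-default endpoint
hconf.set("fs.s3a.path.style.access", "true")    # requires Hadoop >= 2.8.0

# Note the lower-case s3a:// scheme; the filesystem is registered as "s3a".
df = spark.read.json("s3a://silver/testfolder/4a2426b2-856c-4e9b-b698-b3dcdca74f48")
df.printSchema()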

ClassNotFoundException loading data from snowflake with pyspark

I am getting this error when I try to load data from snowflake into a dataframe with pyspark:
py4j.protocol.Py4JJavaError: An error occurred while calling o45.load.
: java.lang.NoClassDefFoundError: net/snowflake/client/jdbc/internal/org/bouncycastle/jce/provider/BouncyCastleProvider
Here is some code to reproduce the error:
from pyspark.sql import SparkSession
from pyspark import SparkConf
conf = SparkConf()
conf.set('spark.jars.packages',
'net.snowflake:spark-snowflake_2.11:2.8.4-spark_2.4,net.snowflake:snowflake-jdbc:3.12.17')
spark = SparkSession.builder.config(conf=conf).getOrCreate()
sf_reader_options = {'sfURL': 'example.snowflakecomputing.com', 'sfAccount': 'example_account',
'sfWarehouse': 'example_warehouse', 'sfRole': 'DATASCIENCE', 'sfUser': 'user',
'sfPassword': 'pass', 'sfDatabase': 'db_name', 'sfSchema': 'schema_name', 'sfTimezone': 'UTC'}
reader = (spark
.read
.format('net.snowflake.spark.snowflake')
.options(**sf_reader_options))
result = reader.option('query', 'select * from TABLE_NAME').load()
The stacktrace for the error looks like this:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/charlie/lark/bigbird/venv/lib/python3.7/site-packages/pyspark/sql/readwriter.py", line 172, in load
return self._df(self._jreader.load())
File "/Users/charlie/lark/bigbird/venv/lib/python3.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/Users/charlie/lark/bigbird/venv/lib/python3.7/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/Users/charlie/lark/bigbird/venv/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o45.load.
: java.lang.NoClassDefFoundError: net/snowflake/client/jdbc/internal/org/bouncycastle/jce/provider/BouncyCastleProvider
at net.snowflake.spark.snowflake.Parameters$.mergeParameters(Parameters.scala:202)
at net.snowflake.spark.snowflake.DefaultSource.createRelation(DefaultSource.scala:59)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:242)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:230)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: net.snowflake.client.jdbc.internal.org.bouncycastle.jce.provider.BouncyCastleProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 17 more
I am using spark 2.4.7 and spark-snowflake 2.8.4, with snowflake jdbc 3.12.17. I am on Mac OS X Big Sur. This happened after I upgraded to Big Sur, though I'm not sure whether that's related.
I have tried:
adding bouncy castle provider to my configuration as a package dependency
checking that JAVA_HOME points to Java 8 (it does)
reinstalling java 8 (with homebrew and adoptopenjdk)
adding bouncy castle as a security provider, per instructions here
updating spark-snowflake and snowflake-jdbc (was using 2.7.0 and 3.12.3 before, same error)
Any help would be much appreciated!
Ultimately, I was able to resolve this by:
downloading Java straight from Oracle (rather than uninstalling and reinstalling with homebrew),
deleting spark, downloading again (from apache, not via homebrew), and setting up environment variables as described here (mostly... I use a virtual environment so I didn't hardcode PYSPARK_PYTHON to system python3)
uninstalling pyspark and reinstalling
quitting pycharm and reopening (this refreshed all my environment variables that were set in .zshrc, like JAVA_HOME)
There's almost certainly an easier way, but this worked.
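If it helps anyone untangle a similar environment, a small hedged sketch for checking which JVM and which packages a fresh session actually picked up (spark._jvm is a py4j internal, not a public API):
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf().set(
    'spark.jars.packages',
    'net.snowflake:spark-snowflake_2.11:2.8.4-spark_2.4,net.snowflake:snowflake-jdbc:3.12.17')
spark = SparkSession.builder.config(conf=conf).getOrCreate()

# Confirm the driver JVM is the Java 8 you expect; JAVA_HOME changes only apply
# to sessions started after the environment was refreshed (e.g. after restarting PyCharm).
print(spark.sparkContext._jvm.java.lang.System.getProperty('java.version'))
print(spark.sparkContext._jvm.java.lang.System.getProperty('java.home'))

# Confirm the requested packages were actually registered with the session.
print(spark.sparkContext.getConf().get('spark.jars.packages'))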

Spark Kafka streaming in spark 2.3.0 with python

I recently upgraded to Spark 2.3.0. I have an existing Spark job which used to run on Spark 2.2.0.
I am facing a Java AbstractMethodError exception.
My simple code:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
if __name__ == "__main__":
    print "Here it is!"
    sc = SparkContext(appName="Tester")
    ssc = StreamingContext(sc, 1)
This works fine with Spark 2.2.0.
With Spark 2.3.0, I am getting the following exception:
ssc = StreamingContext(sc, 1)
File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/streaming/context.py", line 61, in __init__
File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/streaming/context.py", line 65, in _initialize_context
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1428, in __call__
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.streaming.api.java.JavaStreamingContext.
: java.lang.AbstractMethodError
at org.apache.spark.util.ListenerBus$class.$init$(ListenerBus.scala:35)
at org.apache.spark.streaming.scheduler.StreamingListenerBus.<init>(StreamingListenerBus.scala:30)
at org.apache.spark.streaming.scheduler.JobScheduler.<init>(JobScheduler.scala:57)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:184)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:76)
at org.apache.spark.streaming.api.java.JavaStreamingContext.<init>(JavaStreamingContext.scala:130)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
I am using spark-streaming-kafka-0-8_2.11-2.3.0.jar in the spark-submit command with the --packages option.
I also tried using spark-streaming-kafka-0-8-assembly_2.11-2.3.0.jar with the --packages and --jars options.
Python version: 2.7.5
I followed the guide here: https://spark.apache.org/docs/2.3.0/streaming-kafka-0-8-integration.html
The Kafka 0.8 integration for Spark Streaming is deprecated in 2.3.0, but according to the documentation it is still available.
My command looks like:
spark-submit --master spark://10.183.0.41:7077 --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 Kafka_test.py
For sure, something changed in the underlying Scala code of Spark.
Has anyone faced the same issue?
https://spark.apache.org/docs/2.3.0/streaming-kafka-integration.html
Support for Kafka 0.8 is deprecated as of Spark 2.3.0.
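Since the 0-8 integration is on its way out, a hedged alternative (not the DStream API used in the question) is the Structured Streaming Kafka source; the sketch below assumes the spark-sql-kafka-0-10 package, a local broker, and a placeholder topic:
# Submit with the matching package, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 Kafka_test.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Tester").getOrCreate()

# Structured Streaming reader for Kafka; broker address and topic are placeholders.
stream_df = (spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "localhost:9092")
             .option("subscribe", "test_topic")
             .load())

# Kafka records arrive as binary key/value columns; cast them for display.
query = (stream_df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
         .writeStream
         .format("console")
         .start())
query.awaitTermination()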

Load Spark Dataframe into ElasticSearch using PySpark [duplicate]

I have a Spark DataFrame that I am trying to push to AWS Elasticsearch, but before that I was testing this sample code snippet to push to ES:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('ES_indexer').getOrCreate()
df = spark.createDataFrame([{'num': i} for i in xrange(10)])
df = df.drop('_id')
df.write.format(
'org.elasticsearch.spark.sql'
).option(
'es.nodes', 'http://spark-data-push-adertadaltdpioy124.us-west-2.es.amazonaws.com'
).option(
'es.port', 9200
).option(
'es.resource', '%s/%s' % ('index_name', 'doc_type_name'),
).save()
I get an error saying,
java.lang.ClassNotFoundException: Failed to find data source: org.elasticsearch.spark.sql. Please find packages at http://spark.apache.org/third-party-projects.html
Any suggestions would be greatly appreciated.
Error Trace:
Traceback (most recent call last):
File "es_3.py", line 12, in <module>
'es.resource', '%s/%s' % ('index_name', 'doc_type_name'),
File "/usr/local/lib/python2.7/site-packages/pyspark/sql/readwriter.py", line 732, in save
self._jwrite.save()
File "/usr/local/lib/python2.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/lib/python2.7/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/local/lib/python2.7/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o46.save.
: java.lang.ClassNotFoundException: Failed to find data source: org.elasticsearch.spark.sql. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.spark.sql.DefaultSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:634)
... 12 more
tl;dr Use pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.2.0 and use format("es") to reference the connector.
Quoting Installation from the official documentation of the Elasticsearch for Apache Hadoop product:
Just like other libraries, elasticsearch-hadoop needs to be available in Spark’s classpath.
And later in Supported Spark SQL versions:
elasticsearch-hadoop supports both version Spark SQL 1.3-1.6 and Spark SQL 2.0 through two different jars: elasticsearch-spark-1.x-<version>.jar and elasticsearch-hadoop-<version>.jar
elasticsearch-spark-2.0-<version>.jar supports Spark SQL 2.0
That looks like an issue with the documentation (as they use two different versions of the jar file), but it does mean that you have to use the proper jar file on the CLASSPATH of your Spark application.
And later in the same document:
Spark SQL support is available under org.elasticsearch.spark.sql package.
That simply says that the format (in df.write.format('org.elasticsearch.spark.sql')) is correct.
Further down the document you can find that you could even use an alias df.write.format("es") (!)
I found the Apache Spark section in the project's repository on GitHub more readable and up to date.
Update: The current ES-hadoop package as of June 2020 is 7.7.1, so I used pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.7.1 instead.
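Putting the answer together, a minimal sketch of the write using the es alias, started with the --packages coordinate from the tl;dr; es.nodes.wan.only is an extra assumption that is commonly needed for hosted/AWS Elasticsearch and is not mentioned in this thread:
# Start with the connector on the classpath, e.g.:
#   pyspark --packages org.elasticsearch:elasticsearch-hadoop:7.2.0
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('ES_indexer').getOrCreate()
df = spark.createDataFrame([(i,) for i in range(10)], ['num'])

(df.write
   .format('es')  # alias for org.elasticsearch.spark.sql
   .option('es.nodes', 'http://spark-data-push-adertadaltdpioy124.us-west-2.es.amazonaws.com')
   .option('es.port', '9200')
   .option('es.nodes.wan.only', 'true')  # assumption: usually required for hosted ES
   .option('es.resource', 'index_name/doc_type_name')
   .save())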
You have to mention the version of your Elasticsearch DB at the end of the package, e.g. --packages org.elasticsearch:elasticsearch-hadoop:(version). In my case it was org.elasticsearch:elasticsearch-hadoop:7.0.0.
