Environment details
Spark version: 3.x
Python version: 3.8, Java version: 8
azure-eventhubs-spark_2.12-2.3.17.jar
import json
from pyspark.sql import SparkSession

# getOrCreate() reuses the SparkSession shared across jobs instead of creating one SparkSession per job.
spark = SparkSession.builder.appName('ntorq_eventhub_load').getOrCreate()

# NTORQ Event Hub connection string.
ntorq_connection_string = "connection-string"
ehConf = {}
ehConf['eventhubs.connectionString'] = spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(ntorq_connection_string)
# ehConf['eventhubs.connectionString'] = ntorq_connection_string
ehConf['eventhubs.consumerGroup'] = "$default"

OFFSET_START = "-1"     # start of the stream
OFFSET_END = "@latest"  # end of the stream
# Create the starting and ending positions
startingEventPosition = {
    "offset": OFFSET_START,
    "seqNo": -1,           # not in use
    "enqueuedTime": None,  # not in use
    "isInclusive": True
}

endingEventPosition = {
    "offset": OFFSET_END,
    "seqNo": -1,           # not in use
    "enqueuedTime": None,  # not in use
    "isInclusive": True
}
# Put the positions into the Event Hub config dictionary
ehConf["eventhubs.startingPosition"] = json.dumps(startingEventPosition)
ehConf["eventhubs.endingPosition"] = json.dumps(endingEventPosition)
df = spark \
    .readStream \
    .format("eventhubs") \
    .options(**ehConf) \
    .load() \
    .selectExpr("cast(body as string) as body_str")

df.writeStream \
    .format("console") \
    .start() \
    .awaitTermination()  # block so the local script keeps the stream running
Error:
21/04/25 20:17:53 WARN Utils: Your hostname,resolves to a loopback address: 127.0.0.1; using 192.168.1.202 instead (on interface en0)
21/04/25 20:17:53 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/04/25 20:17:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
File "/Users/PycharmProjects/pythonProject/test.py", line 12, in <module>
ehConf['eventhubs.connectionString'] = spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(ntorq_connection_string)
TypeError: 'JavaPackage' object is not callable
The code works fine in the Databricks environment, but it is unable to consume all messages from the Event Hub. I tried clearing the default checkpoint folders before every run and still face the issue, so I want to try it on my local system.
When running in the local environment I hit the JavaPackage error above.
Appreciate any help.
Thank you
You need to add the EventHubs package when creating the session:
spark = SparkSession.builder.appName('ntorq_eventhub_load') \
    .config("spark.jars.packages", "com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.18") \
    .getOrCreate()
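If you would rather reuse the azure-eventhubs-spark_2.12-2.3.17.jar you already downloaded instead of resolving the package from Maven, here is a minimal sketch (the jar path is a placeholder):
# Sketch: load the connector from a local jar instead of Maven (path is a placeholder).
spark = SparkSession.builder.appName('ntorq_eventhub_load') \
    .config("spark.jars", "/path/to/azure-eventhubs-spark_2.12-2.3.17.jar") \
    .getOrCreate()
Either way, the connector has to be on the JVM classpath before the session starts; otherwise org.apache.spark.eventhubs is not visible through Py4J and the encrypt() call fails with the 'JavaPackage' object is not callable error.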
Related
I'm working in a virtual machine. I run a Spark Streaming job which I basically copied from a Databricks tutorial.
%pyspark
query = (
    streamingCountsDF
    .writeStream
    .format("memory")        # memory = store in-memory table
    .queryName("counts")     # counts = name of the in-memory table
    .outputMode("complete")  # complete = all the counts should be in the table
    .start()
)
Py4JJavaError: An error occurred while calling o101.start.
: java.net.ConnectException: Call From VirtualBox/127.0.1.1 to localhost:8998 failed on connection exception: java.net.ConnectException:
I checked and there is no service listening on port 8998. I learned that this port is associated with the Apache Livy server, which I am not using. Can someone point me in the right direction?
Ok, so I fixed this issue. First, I added 'file://' when specifying the input folder. Second, I added a checkpoint location. See code below:
from pyspark.sql.functions import window

inputFolder = 'file:///home/sallos/tmp/'
streamingInputDF = (
    spark
    .readStream
    .schema(schema)
    .option("maxFilesPerTrigger", 1)  # treat a sequence of files as a stream by picking one file at a time
    .csv(inputFolder)
)

streamingCountsDF = (
    streamingInputDF
    .groupBy(
        streamingInputDF.SrcIPAddr,
        window(streamingInputDF.Datefirstseen, "30 seconds"))
    .sum('Bytes').withColumnRenamed("sum(Bytes)", "sum_bytes")
)

query = (
    streamingCountsDF
    .writeStream.format("memory")
    .queryName("sumbytes")
    .outputMode("complete")
    .option("checkpointLocation", "file:///home/sallos/tmp_checkpoint/")
    .start()
)
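As a quick sanity check (a minimal sketch, using the query name registered above), the memory sink can then be queried as an in-memory table from the same session:
import time

time.sleep(10)  # give the stream time to process at least one micro-batch
spark.sql("SELECT * FROM sumbytes").show()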
When attempting to read streaming data from Azure Event Hub with Databricks on Apache Spark, I get the error
AttributeError: 'str' object has no attribute '_jvm'
The details of the error are as follows:
----> 8 ehConf['eventhubs.connectionString'] = sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
The code is as follows:
sparkContext = ""
connectionString = 'Endpoint=sb://namespace.servicebus.windows.net/;SharedAccessKeyName=both4;SharedAccessKey=adfdMyKeyIGBKYBs=;EntityPath=hubv5'
# Source with default settings
connectionString = connectionString
ehConf = {}
ehConf['eventhubs.connectionString'] = sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
streaming_df = spark \
    .readStream \
    .format("eventhubs") \
    .options(**ehConf) \
    .load()
Has anyone come across this error and found a solution?
It shouldn't be your sparkContext variable (which is just an empty string here), but the actual SparkContext, sc:
ehConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
P.S. But it's just easier to use the built-in Kafka connector with Event Hubs - you don't need to install anything, and it's more performant.
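For reference, here is a minimal sketch of reading the same Event Hub through its Kafka-compatible endpoint; the namespace and hub (topic) name are taken from the example connection string above and should be treated as placeholders:
# Sketch: Event Hubs exposes a Kafka-compatible endpoint on port 9093.
# The namespace and hub (topic) name below are placeholders.
eh_sasl = (
    'org.apache.kafka.common.security.plain.PlainLoginModule required '
    'username="$ConnectionString" password="' + connectionString + '";'
)

streaming_df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "namespace.servicebus.windows.net:9093") \
    .option("subscribe", "hubv5") \
    .option("kafka.security.protocol", "SASL_SSL") \
    .option("kafka.sasl.mechanism", "PLAIN") \
    .option("kafka.sasl.jaas.config", eh_sasl) \
    .load()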
I'm trying to read an Excel file from HDFS location using Crealytics package and keep getting an error (Caused by: java.lang.ClassNotFoundException:org.apache.spark.sql.connector.catalog.TableProvider). My code is below. Any tips? When running the below code, the spark session initiates fine and the Crealytics package loads without error. The error only comes when running the "spark.read" code. The file location I'm using is accurate.
from pyspark import SparkConf
from pyspark.sql import SparkSession

def spark_session(spark_conf):
    conf = SparkConf()
    for (key, val) in spark_conf.items():
        conf.set(key, val)
    spark = SparkSession \
        .builder \
        .enableHiveSupport() \
        .config(conf=conf) \
        .getOrCreate()
    return spark
spark_conf = {"spark.executor.memory": "16g",
              "spark.yarn.executor.memoryOverhead": "3g",
              "spark.dynamicAllocation.initialExecutors": 2,
              "spark.driver.memory": "16g",
              "spark.kryoserializer.buffer.max": "1g",
              "spark.driver.cores": 32,
              "spark.executor.cores": 8,
              "spark.yarn.queue": "adhoc",
              "spark.app.name": "CDSW_basic",
              "spark.dynamicAllocation.maxExecutors": 32,
              "spark.jars.packages": "com.crealytics:spark-excel_2.12:0.14.0"
              }

df = spark.read.format("com.crealytics.spark.excel") \
    .option("useHeader", "true") \
    .load("/user/data/Block_list.xlsx")
I've also tried loading it outside of the session function with the code below, which yields the same error once I try to read the file.
crealytics_driver_loc = "com.crealytics:spark-excel_2.12:0.14.0"
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages ' + crealytics_driver_loc + ' pyspark-shell'
Looks like I'm answering my own question. After a great deal of fiddling around, I've found that using an older version of crealytics works with my setup, though I'm uncertain why. The package that worked was version 0.13 ("com.crealytics:spark-excel_2.12:0.13.0"), even though the newest is version 0.15.
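For reference, a minimal sketch that pins the release that worked here, using the same PYSPARK_SUBMIT_ARGS approach shown above; note it has to be set before the SparkSession is created:
import os

# Sketch: pin the older spark-excel release that worked in this setup.
crealytics_driver_loc = "com.crealytics:spark-excel_2.12:0.13.0"
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages ' + crealytics_driver_loc + ' pyspark-shell'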
Hi, I'm trying to create a Neo4j sink using PySpark and Kafka, but for some reason the sink is creating duplicates in Neo4j and I'm not sure why. I expect to get only one node, but it looks like it's creating four. If someone has an idea, please let me know.
Kafka producer code:
from kafka import KafkaProducer
import json
producer = KafkaProducer(bootstrap_servers='10.0.0.38:9092')
message = {
    'test_1': 'test_1',
    'test_2': 'test_2'
}
producer.send('test_topic', json.dumps(message).encode('utf-8'))
producer.close()
Kafka consumer code:
from kafka import KafkaConsumer
import findspark
from py2neo import Graph
import json
findspark.init()
from pyspark.sql import SparkSession
class ForeachWriter:
    def open(self, partition_id, epoch_id):
        neo4j_uri = ''  # neo4j uri
        neo4j_auth = ('', '')  # neo4j user, password
        self.graph = Graph(neo4j_uri, auth=neo4j_auth)
        return True

    def process(self, msg):
        msg = json.loads(msg.value.decode('utf-8'))
        self.graph.run("CREATE (n: MESSAGE_RECEIVED) SET n.key = '" + str(msg).replace("'", '"') + "'")
        # raising here marks the task as failed, so Spark may retry it
        raise KeyError('received message: {}. finished creating node'.format(msg))

spark = SparkSession.builder.appName('test-consumer') \
    .config('spark.executor.instances', 1) \
    .getOrCreate()

ds1 = spark.readStream \
    .format('kafka') \
    .option('kafka.bootstrap.servers', '10.0.0.38:9092') \
    .option('subscribe', 'test_topic') \
    .load()

query = ds1.writeStream.foreach(ForeachWriter()).start()
query.awaitTermination()
neo4j graph after running code
After doing some searching, I found this snippet from Stream Processing with Apache Spark: Mastering Structured Streaming and Spark Streaming (chapter 11, p. 151), right after it describes open, process, and close for ForeachWriter:
This contract is part of the data delivery semantics because it allows us to remove duplicated partitions that might already have been sent to the sink but are reprocessed by Structured Streaming as part of a recovery scenario. For that mechanism to properly work, the sink must implement some persistent way to remember the partition/version combinations that it has already seen.
On another note, from the Spark documentation: https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html (see the section on Foreach).
Note: Spark does not guarantee same output for (partitionId, epochId), so deduplication cannot be achieved with (partitionId, epochId). e.g. source provides different number of partitions for some reasons, Spark optimization changes number of partitions, etc. See SPARK-28650 for more details. If you need deduplication on output, try out foreachBatch instead.
It seems I need to implement a uniqueness check on the sink side, because Structured Streaming can reprocess partitions after a failure when ForeachWriter is used; otherwise I have to switch to foreachBatch instead, as in the sketch below.
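A minimal sketch of the foreachBatch route, reading from the same ds1 stream as above; the Neo4j URI, credentials, and checkpoint path are placeholders, and MERGE on a message key keeps the write idempotent so a replayed batch does not create extra nodes:
from py2neo import Graph

# Sketch: idempotent Neo4j sink via foreachBatch + Cypher MERGE.
# The URI, credentials and checkpoint path are placeholders.
def write_to_neo4j(batch_df, batch_id):
    graph = Graph('bolt://localhost:7687', auth=('neo4j', 'password'))
    rows = batch_df.selectExpr("cast(value as string) as value").collect()
    for row in rows:
        # MERGE matches an existing node with the same key instead of creating a new one,
        # so reprocessing the same batch does not produce duplicates.
        graph.run("MERGE (n:MESSAGE_RECEIVED {key: $key})", key=row.value)

query = ds1.writeStream \
    .foreachBatch(write_to_neo4j) \
    .option("checkpointLocation", "/tmp/neo4j_checkpoint") \
    .start()
query.awaitTermination()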
I am trying to create a dataframe using the following code in Spark 2.0. While executing the code in Jupyter/console, I am facing the error below. Can someone help me get rid of this error?
Error:
Py4JJavaError: An error occurred while calling o34.csv.
: java.lang.RuntimeException: Multiple sources found for csv (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat, com.databricks.spark.csv.DefaultSource15), please specify the fully qualified class name.
at scala.sys.package$.error(package.scala:27)
Code:
from pyspark.sql import SparkSession

if __name__ == "__main__":
    session = SparkSession.builder.master('local') \
        .appName("RealEstateSurvey").getOrCreate()
    df = session \
        .read \
        .option("inferSchema", value = True) \
        .option('header', 'true') \
        .csv("/home/senthiljdpm/RealEstate.csv")
    print("=== Print out schema ===")
    session.stop()
The error occurs because you have both libraries (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat and com.databricks.spark.csv.DefaultSource) on your classpath, and Spark gets confused about which one to choose.
All you need to do is tell Spark to use com.databricks.spark.csv.DefaultSource by setting the format option:
df = session \
    .read \
    .format("com.databricks.spark.csv") \
    .option("inferSchema", value = True) \
    .option('header', 'true') \
    .csv("/home/senthiljdpm/RealEstate.csv")
Another alternative is to use load instead:
df = session \
    .read \
    .format("com.databricks.spark.csv") \
    .option("inferSchema", value = True) \
    .option('header', 'true') \
    .load("/home/senthiljdpm/RealEstate.csv")
If anyone faced a similar issue in Spark Java, it could be because you have multiple versions of the spark-sql jar in your classpath. Just FYI.
I had faced the same issue, and it got fixed when I changed the Hudi version used in pom.xml from 9.0 to 11.1.