How to check if a DataFrame was already cached/persisted before? - apache-spark

For Spark's RDD this is quite trivial, as it exposes a getStorageLevel method, but a DataFrame does not seem to expose anything similar. Anyone?

You can check whether a DataFrame is cached or not using the Catalog (org.apache.spark.sql.catalog.Catalog), which was introduced in Spark 2.
Code example:
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder
  .master("local")
  .appName("example")
  .getOrCreate()
val df = sparkSession.read.csv("src/main/resources/sales.csv")
df.createTempView("sales")

// interact with the catalog
val catalog = sparkSession.catalog
// print the databases
catalog.listDatabases().select("name").show()
// print all the tables
catalog.listTables().select("name").show()
// check whether the "sales" view is cached
println(catalog.isCached("sales"))
df.cache()
println(catalog.isCached("sales"))
Using the code above you can list all the tables and check whether a table is cached or not.
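If you are on Spark 2.1 or later, a DataFrame/Dataset also exposes a storageLevel method directly, which is the closest equivalent to the RDD getStorageLevel mentioned in the question. A minimal sketch, reusing df from the example above:
import org.apache.spark.storage.StorageLevel

// StorageLevel.NONE means the DataFrame is not marked for caching
println(df.storageLevel == StorageLevel.NONE)   // true before caching
df.cache()
println(df.storageLevel == StorageLevel.NONE)   // false once the plan is registered in the cache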

Related

Spark - SparkSession access issue

I have a problem similar to the one in "Spark java.lang.NullPointerException Error when filter spark data frame on inside foreach iterator".
String_Lines.foreachRDD { line ->
    line.foreach { x ->
        // JSON to DF example
        val sparkConfig = SparkConf().setAppName("JavaKinesisWordCountASL").setMaster("local[*]")
            .set("spark.sql.warehouse.dir", "file:///C:/tmp")
        val spark = SparkSession.builder().config(sparkConfig).orCreate
        val outer_jsonData = Arrays.asList(x)
        val outer_anotherPeopleDataset = spark.createDataset(outer_jsonData, Encoders.STRING())
        spark.read().json(outer_anotherPeopleDataset).createOrReplaceTempView("jsonInnerView")
        spark.sql("select name, address.city, address.state from jsonInnerView").show(false)
        println("Current String #" + x)
    }
}
#thebluephantom did explain it to the point. I have my code in foreachRDD now, but it still doesn't work. This is Kotlin, and I am running it on my local laptop with IntelliJ. Somehow it's not picking up the SparkSession, as I understand after reading all the blogs. If I delete spark.read and spark.sql, everything else works OK. What should I do to fix this?
If I delete "spark.read and spark.sql", everything else works OK
If you delete those, you're not actually making Spark do anything; you're only defining what Spark should eventually do (Spark evaluates transformations lazily, so nothing runs until an action such as show is called).
Somehow it's not picking up the SparkSession, as I understand
It's "picking it up" just fine. The error is happening because it's picking up a brand new SparkSession. You should already have defined one of these outside of the foreachRDD method, but if you try to reuse it there, you might run into different issues.
Assuming String_Lines is already a DataFrame, there's no point in looping over all of its RDD data and trying to create a brand new SparkSession. Or, if it's a DStream, convert it to a streaming DataFrame instead...
That being said, you should be able to select data from it directly:
// unclear what the schema of this is
val selected = String_Lines.selectExpr("name", "address.city", "address.state")
selected.show(false)
You may need to add get_json_object calls in there if you're trying to parse JSON strings.
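A minimal sketch of that approach, assuming the raw JSON string from Kafka sits in a column named value (that column name is an assumption, not something shown in the question):
import org.apache.spark.sql.functions.{col, get_json_object}

// pull individual fields out of the JSON string column
val selected = String_Lines.select(
  get_json_object(col("value"), "$.name").alias("name"),
  get_json_object(col("value"), "$.address.city").alias("city"),
  get_json_object(col("value"), "$.address.state").alias("state"))
selected.show(false)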
I was finally able to solve it.
I modified the code like this... it's clean and working.
This is the String_Lines data type:
val String_Lines: JavaDStream<String>
String_Lines.foreachRDD { x ->
    // 'spark' is the SparkSession created once, outside the stream processing
    val df = spark.read().json(x)
    df.printSchema()
    df.show(2, false)
}

How to store data from a dataframe in a variable to use as a parameter in a select in cassandra?

I have a Spark Structured Streaming application. The application receives data from Kafka and should use these values as parameters to process data from a Cassandra database. My question is: how do I use the data in the input (Kafka) dataframe as "where" parameters in a Cassandra "select" without hitting the error below?
Exception in thread "main" org.apache.spark.sql.AnalysisException: Queries with streaming sources must be executed with writeStream.start();
This is my df input:
val df = spark
  .readStream
  .format("kafka")
  .options(Map(
    "kafka.bootstrap.servers" -> kafka_bootstrap,
    "subscribe" -> kafka_topic,
    "startingOffsets" -> "latest",
    "fetchOffset.numRetries" -> "5",
    "kafka.group.id" -> groupId))
  .load()
I get this error whenever I try to store the dataframe values in a variable to use as a parameter.
This is the method I created to try to convert the data into variables; with it, Spark gives the error I mentioned earlier:
def processData(messageToProcess: DataFrame): DataFrame = {
  val messageDS: Dataset[Message] = messageToProcess.as[Message]
  val listData: Array[Message] = messageDS.collect()
  listData.foreach(x => println(x.country))
  val mensagem = messageToProcess
  mensagem
}
When you need to use data in Kafka to query data in Cassandra, that operation is a typical join between two datasets - you don't need to call .collect to find entries, you just do the join. Enriching data in Kafka with data from an external dataset is quite a typical thing to do, and Cassandra provides the low-latency lookups for it.
Your code could look like the following (you'll need to configure the so-called DirectJoin; see the link below):
import spark.implicits._
import org.apache.spark.sql.cassandra._
val df = spark.readStream.format("kafka")
.options(Map(...)).load()
// ... decode the data in Kafka into columns ...
val cassdata = spark.read.cassandraFormat("table", "keyspace").load
val joined = df.join(cassdata, cassdata("pk") === df("some_column"))
val processed = ...   // process the joined data
val query = processed.writeStream. ... .start()   // output the data somewhere
query.awaitTermination()
I have a detailed blog post on how to perform efficient joins with data in Cassandra.
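For reference, DirectJoin is usually switched on through the connector's Catalyst extensions when the session is built. A rough sketch, assuming Spark Cassandra Connector 2.5+ is on the classpath (the host below is a placeholder):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("kafka-cassandra-enrichment")
  // registers the connector's optimizations, including DirectJoin
  .config("spark.sql.extensions", "com.datastax.spark.connector.CassandraSparkExtensions")
  .config("spark.cassandra.connection.host", "127.0.0.1")
  .getOrCreate()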
As the error message suggests, you have to use writeStream.start() in order to execute a Structured Streaming query.
You can't use the same actions you use for batch dataframes (like .collect(), .show() or .count()) on streaming dataframes, see the Unsupported Operations section of the Spark Structured Streaming documentation.
In your case, you are trying to use messageDS.collect() on a streaming dataset, which is not allowed. To achieve this goal you can use a foreachBatch output sink to collect the rows you need at each microbatch:
streamingDF.writeStream.foreachBatch { (microBatchDf: DataFrame, batchId: Long) =>
  // Now microBatchDf is no longer a streaming dataframe
  // you can check with microBatchDf.isStreaming
  val messageDS: Dataset[Message] = microBatchDf.as[Message]
  val listData: Array[Message] = messageDS.collect()
  listData.foreach(x => println(x.country))
  // ...
}.start()

Create index in Ignite table when save dataframe from pyspark

I save a Spark dataframe to an Apache Ignite table with this code:
df.write\
.format("ignite")\
.option("table","REPORT")\
.option("primaryKeyFields", ', '.join(map(str, df.schema.names[:-1])))\
.option("config",configFile)\
.option("compression", "gzip")\
.mode("overwrite")\
.save()
But I cannot find how to create an index on a field as part of this overwrite save.
I need the statement below, but executed as part of the .save() operation:
CREATE INDEX REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)
It's pretty simple to do using syntax like the following:
CREATE INDEX IF NOT EXISTS AGE_IDX ON "PUBLIC".Person (AGE)
If the index already exists (for example, because the table wasn't recreated), IF NOT EXISTS kicks in and nothing will be done. Otherwise, the index will be created.
It can be run using any SQL tool that works with Ignite (Web Console, Visor, sqlline, JDBC, ODBC, etc.), but I guess you are going to do it from a Spark job. So you can try to use IgniteSparkSession or IgniteRDD to run SQL over Ignite:
IgniteSparkSession igniteSession = IgniteSparkSession.builder()
.appName("Spark Ignite example")
.igniteConfig(configPath)
.getOrCreate();
igniteSession.sqlContext().sql("CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)");
or
val cacheRdd = igniteContext.fromCache("partitioned")
val result = cacheRdd.sql(
"CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)")
No, you can't do that when saving a DataFrame with Spark. Creating a table and creating an index are two different operations.
The full list of options for saving a DataFrame into Ignite contains no option for index creation.
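If the index really has to be created programmatically right after the save, one workaround (sketched here in Scala like the other snippets, and assuming Ignite's JDBC thin driver is on the classpath and listening on its default endpoint) is to issue the CREATE INDEX statement from the question over plain JDBC:
import java.sql.DriverManager

// open a thin-driver connection and create the index as a separate step after the .save()
val conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")
try {
  conn.createStatement().execute(
    "CREATE INDEX IF NOT EXISTS REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)")
} finally {
  conn.close()
}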

Spark save dataframe metadata and reuse it

When I read a dataset with a lot of files (in my case from Google Cloud Storage), spark.read spends a lot of time working before the first manipulation.
I'm not sure what it does, but I guess it maps the files and samples them to infer the schema.
My question is: is there an option to save the metadata collected about the dataframe and reuse it in other jobs on the dataset?
-- UPDATE --
The data is arranged like this:
gs://bucket-name/table_name/day=yyyymmdd/many_json_files
When I run df = spark.read.json("gs://bucket-name/table_name"), it takes a lot of time. I wish I could do the following:
df = spark.read.json("gs://bucket-name/table_name")
df.saveMetadata("gs://bucket-name/table_name_metadata")
And in another session:
df = spark.read.metadata("gs://bucket-name/table_name_metadata").json("gs://bucket-name/table_name")
...
<some df manipulation>
...
If we have a lot of files that share the same schema, we only need to infer the schema once and reuse it for the later files, like this:
// infer the schema once from a single representative file
val df0 = spark.read.json("first_file_we_wanna_spark_to_info.json")
val schema = df0.schema
// for the other files, pass the schema explicitly so Spark skips inference
val df = spark.read.schema(schema).json("donnot_info_schema.json")
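If the goal from the question is to reuse the inferred schema in a completely separate session, one option is to persist the schema's JSON representation and rebuild it later with DataType.fromJson. A sketch, assuming a local path for the schema file (in practice it could live next to the data in GCS):
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import org.apache.spark.sql.types.{DataType, StructType}

// first session: infer the schema once (e.g. from a single day's files) and save it as JSON
val schemaJson = spark.read.json("gs://bucket-name/table_name/day=yyyymmdd").schema.json
Files.write(Paths.get("/tmp/table_name_schema.json"), schemaJson.getBytes(StandardCharsets.UTF_8))

// later session: rebuild the schema and skip inference entirely
val savedJson = new String(Files.readAllBytes(Paths.get("/tmp/table_name_schema.json")), StandardCharsets.UTF_8)
val schema = DataType.fromJson(savedJson).asInstanceOf[StructType]
val df = spark.read.schema(schema).json("gs://bucket-name/table_name")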

How to parse the streaming XML into dataframe?

I'm consuming an XML file from a Kafka topic. Can anyone tell me how to parse the XML into a dataframe?
val df = sqlContext.read
  .format("com.databricks.spark.xml")
  //.option("rowTag", "ns:header")
  //.options(Map("rowTag" -> "ntfyTrns:payloadHeader", "rowTag" -> "ns:header"))
  .option("rowTag", "ntfyTrnsDt:notifyTransactionDetailsReq")
  .load("/home/ubuntu/SourceXML.xml")
df.show()
df.printSchema()
df.select(col("ns:header.ns:captureSystem")).show()
I am able to extract the information from the XML file. What I don't know is how to pass, convert, or load the RDD[String] from the Kafka topic into the SQL read API.
Thanks!
I am facing the same situation. Doing some research, I found that some people are using the following approach to convert the RDD to a DataFrame:
import com.databricks.spark.xml.XmlReader

// wrap each XML fragment in a root tag so spark-xml can parse the collection
val wrapped = rdd.map(xml => s"""<a>$xml</a>""")
val df = new XmlReader().xmlRdd(sqlContext, wrapped)
You just have to obtain the RDDs from the DStream. I am doing this using PySpark:
streamElement = ssc.textFileStream("s3n://your_path")
streamElement.foreachRDD(process)
where the process method has the following structure, so you can do whatever you need with each RDD:
def process(time, rdd):
    # work with the micro-batch's rdd here
    return value
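Putting the two parts together for the original Kafka case, a rough Scala sketch (assuming a DStream[String] of XML payloads and spark-xml on the classpath; the row tag is the one from the question, and parseXmlStream is just a hypothetical helper name):
import com.databricks.spark.xml.XmlReader
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.dstream.DStream

def parseXmlStream(xmlStream: DStream[String], spark: SparkSession): Unit = {
  xmlStream.foreachRDD { rdd =>
    if (!rdd.isEmpty()) {
      // convert each micro-batch of XML strings into a DataFrame
      val df = new XmlReader()
        .withRowTag("ntfyTrnsDt:notifyTransactionDetailsReq")
        .xmlRdd(spark.sqlContext, rdd)
      df.printSchema()
      df.show(2, false)
    }
  }
}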
