Currently, I can parse a text file to a Spark DataFrame by way of the RDD API with the following code:
def row_parse_function(raw_string_input):
    # Do parse logic...
    return pyspark.sql.Row(...)

raw_rdd = spark_context.textFile(full_source_path)

# Convert RDD of strings to RDD of pyspark.sql.Row.
row_rdd = raw_rdd.map(row_parse_function).filter(bool)

# Convert RDD of pyspark.sql.Row to Spark DataFrame.
data_frame = spark_sql_context.createDataFrame(row_rdd, schema)
Is this current approach ideal?
Or is there a better way to do this without using the older RDD API?
FYI, this is Spark 2.0.
Clay,
This is a good approach for loading a file that does not have a specific format (such as CSV, JSON, ORC, or Parquet) and is not coming from a database.
If you have any kind of custom parsing logic to apply to it, this is the right way to do it. The RDD API exists for exactly this kind of situation, when you need to run non-trivial logic over your data.
You can read more about when to use each of the Spark APIs; you are in a situation where the RDD API is the best approach.
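If you want the same thing written against the Spark 2.0 entry point, here is a minimal sketch (only a sketch: it reuses the asker's row_parse_function, full_source_path and schema, and assumes they are defined as above):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Same parse-then-convert pattern, driven from the unified SparkSession
# instead of separate SparkContext / SQLContext objects.
raw_rdd = spark.sparkContext.textFile(full_source_path)
row_rdd = raw_rdd.map(row_parse_function).filter(bool)
data_frame = spark.createDataFrame(row_rdd, schema)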
What is the most efficient way to read only a subset of columns in Spark from a Parquet file that has many columns? Is spark.read.format("parquet").load(<parquet>).select(...col1, col2) the best way to do that? I would also prefer to use a typesafe Dataset with case classes to pre-define my schema, but I'm not sure.
val df = spark.read.parquet("fs://path/file.parquet").select(...)
This will read only the corresponding columns. Indeed, Parquet is a columnar storage format and it is meant exactly for this type of use case. Try running df.explain and Spark will tell you that only the corresponding columns are read (it prints the execution plan). explain will also tell you which filters are pushed down to the physical execution plan if you also use a where condition. Finally, use the following code to convert the DataFrame (a Dataset of rows) to a Dataset of your case class.
case class MyData...
val ds = df.as[MyData]
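In PySpark terms, a minimal sketch of the same column-pruning check (the path is just the placeholder from above):
df = spark.read.parquet("fs://path/file.parquet").select("col1", "col2")
# The physical plan printed by explain() should show a ReadSchema containing
# only col1 and col2.
df.explain()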
At least in some cases, getting a DataFrame with all columns and then selecting a subset won't work. E.g., the following will fail if the Parquet file contains at least one field with a type that is not supported by Spark:
spark.read.format("parquet").load("<path_to_file>").select("col1", "col2")
One solution is to pass load() a schema that contains only the requested columns:
spark.read.format("parquet").load("<path_to_file>",
schema="col1 bigint, col2 float")
Using this, you will be able to load a subset of the Spark-supported Parquet columns even if loading the full file is not possible. I'm using PySpark here, but I would expect the Scala version to have something similar.
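As a self-contained sketch of that schema-subset approach (the path and column types are just the placeholders from the example above):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Only the columns named in the schema are requested from the Parquet file.
df = spark.read.format("parquet").load("<path_to_file>",
                                        schema="col1 bigint, col2 float")
df.printSchema()  # should list only col1 and col2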
Spark supports pushdowns with Parquet so
load(<parquet>).select(...col1, col2)
is fine.
I would also prefer to use a typesafe Dataset with case classes to pre-define my schema, but I'm not sure.
This could be an issue, as it looks like some optimizations don't work in this context; see "Spark 2.0 Dataset vs DataFrame".
Parquet is a columnar file format. It is designed exactly for this kind of use case.
val df = spark.read.parquet("<PATH_TO_FILE>").select(...)
should do the job for you.
I'm trying to analyze data using Kinesis source in PySpark Structured Streaming on Databricks.
I created a Dataframe as shown below.
kinDF = spark.readStream.format("kinesis").option("streamName", "test-stream-1").load()
Later I converted the data from base64 encoding as below.
df = kinDF.withColumn("xml_data", expr("CAST(data as string)"))
Now, I need to extract a few fields from the df.xml_data column using XPath. Can you please suggest any possible solution?
If I create a dataframe directly for these xml files as xml_df = spark.read.format("xml").options(rowTag='Consumers').load("s3a://bkt/xmldata"), I'm able to query using xpath:
xml_df.select("Analytics.Amount1").show()
But I'm not sure how to extract elements similarly on a Spark Streaming DataFrame where the data is in text format.
Are there any XML functions to convert text data using a schema? I saw an example for JSON data using from_json.
Is it possible to use spark.read on a dataframe column?
I need to find aggregated "Amount1" for every 5 minutes window.
Thanks for your help
You can use com.databricks.spark.xml.XmlReader to read XML data from a column, but it requires an RDD, which means you need to transform your df to an RDD using df.rdd; this may impact performance.
Below is untested Scala code:
import com.databricks.spark.xml.XmlReader

val xmlRdd = df.select("xml_data").rdd.map(r => r.getString(0))
val xmlDf = new XmlReader().xmlRdd(spark, xmlRdd)
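Alternatively, since the question specifically mentions XPath, Spark SQL's built-in xpath_* functions work directly on a string column; here is a hedged sketch along those lines (the element path and the timestamp column name are assumptions, not taken from the asker's data):
from pyspark.sql.functions import expr, window

# Pull a numeric field straight out of the XML string column via XPath.
parsed = df.withColumn(
    "amount1",
    expr("xpath_double(xml_data, '/Consumers/Analytics/Amount1')")
)

# 5-minute aggregation; assumes the Kinesis source exposes an arrival timestamp column.
agg = parsed.groupBy(window("approximateArrivalTimestamp", "5 minutes")).sum("amount1")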
I'm wondering if there's a way to combine the final result into a single file when using Spark? Here's the code I have:
conf = SparkConf().setAppName("logs").setMaster("local[*]")
sc = SparkContext(conf = conf)
logs_1 = sc.textFile('logs/logs_1.tsv')
logs_2 = sc.textFile('logs/logs_2.tsv')
url_1 = logs_1.map(lambda line: line.split("\t")[2])
url_2 = logs_2.map(lambda line: line.split("\t")[2])
all_urls = url_1.intersection(url_2)
all_urls = all_urls.filter(lambda url: url != "localhost")
all_urls.collect()
all_urls.saveAsTextFile('logs.csv')
The collect() method doesn't seem to be working (or I've misunderstood its purpose). Essentially, I need the 'saveAsTextFile' to output to a single file, instead of a folder with parts.
Please find below some suggestions:
collect() and saveAsTextFile() are both actions; collect() brings the results back to the driver node, so calling both of them here is redundant.
In your case you just need to store the data with saveAsTextFile(); there is no need to call collect().
collect() returns a list of items (and in your case you are not even using the returned value).
As Glennie and Akash suggested, just use coalesce(1) to force a single partition; coalesce(1) will not cause shuffling and hence is much more efficient (see the sketch after the links below).
In the given code you are using Spark's RDD API; I would suggest using DataFrames/Datasets instead.
Please refer to the following links for further details on RDDs and DataFrames:
Difference between DataFrame, Dataset, and RDD in Spark
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
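A minimal sketch of the coalesce(1) suggestion, reusing the asker's all_urls RDD (the output path is a placeholder):
# Collapse to a single partition before saving so that only one part file is written.
all_urls.coalesce(1).saveAsTextFile('single_output')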
Well, before you save, you can repartition once, like below:
all_urls.repartition(1).saveAsTextFile(resultPath)
then you would get just one result file.
You can store it in Parquet format, which is a good fit for HDFS:
all_urls.write.parquet("dir_name")
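Note that write is a DataFrame API rather than an RDD one, so the RDD would need converting first; a minimal sketch, with an illustrative column name:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Wrap each URL string in a tuple so it becomes a single-column DataFrame.
urls_df = spark.createDataFrame(all_urls.map(lambda u: (u,)), ["url"])
urls_df.write.parquet("dir_name")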
I'm using pySpark 2.3, trying to read a csv file that looks like that:
0,0.000476517230863068,0.0008178378961061477
1,0.0008506156837329876,0.0008467260987257776
But it doesn't work:
from pyspark import sql, SparkConf, SparkContext
print (sc.applicationId)
>> <property at 0x7f47583a5548>
data_rdd = spark.textFile(name=tsv_data_path).filter(x.split(",")[0] != 1)
And I get an error:
AttributeError: 'SparkSession' object has no attribute 'textFile'
Any idea how I should read it in pySpark 2.3?
First, textFile exists on the SparkContext (called sc in the repl), not on the SparkSession object (called spark in the repl).
Second, for CSV data, I would recommend using the CSV DataFrame loading code, like this:
df = spark.read.format("csv").load("file:///path/to/file.csv")
You mentioned in comments needing the data as an RDD. You are going to have significantly better performance if you can keep all of your operations on DataFrames instead of RDDs. However, if you need to fall back to RDDs for some reason you can do it like the following:
rdd = df.rdd.map(lambda row: row.asDict())
This approach is better than trying to load the file with textFile and parsing the CSV data yourself. If you use the DataFrame CSV loader, it will properly handle all the CSV edge cases for you, such as quoted fields. Also, if you only need some of the columns, you can filter on the DataFrame before converting it to an RDD, to avoid bringing all that extra data over into the Python interpreter.
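For example, here is a sketch of that column and row filtering before dropping down to an RDD (the _c0/_c1 names are Spark's defaults when the CSV has no header; the path is a placeholder):
# Read the headerless CSV, keep only the first two columns, filter on the
# first column, and only then convert to an RDD if an RDD is really needed.
df = spark.read.format("csv").option("inferSchema", "true").load("file:///path/to/file.csv")
subset = df.select("_c0", "_c1").where(df["_c0"] != 1)
rdd = subset.rdd.map(lambda row: row.asDict())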