Spark Rename Dataframe Columns - apache-spark

I have 2 files in HDFS - one is a CSV file with no header and one is a list of column names. I'm wondering if it's possible to assign the column names to the data frame read from the CSV without actually typing them out, as described here.
I'm looking for something like this:
val df = sqlContext.read.format("com.databricks.spark.csv").option("delimiter", "\t").load("/user/training_data.txt")
val header = sqlContext.read.format("com.databricks.spark.csv").option("delimiter", ",").load("/user/col_names.txt")
df.columns(header)
Is this possible?

One way could be to read the header file using scala.io like this:
import scala.io.Source
val header = Source.fromFile("/user/col_names.txt").getLines.map(_.split(","))
val newNames = header.next
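Note that scala.io.Source reads from the local filesystem; if col_names.txt really lives on HDFS, a hedged alternative is to read the one-line file through Spark itself and end up with the same newNames (this assumes the names are on a single comma-separated line):
val newNames: Array[String] = spark.read
  .textFile("/user/col_names.txt")   // reads from HDFS like any other Spark path
  .head()                            // take the first (only) line on the driver
  .split(",")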
Then, read the CSV file using spark-csv as you do, specifying no header and applying the names like this:
val df = spark.read.format("com.databricks.spark.csv")
.option("header", "false").option("delimiter", "\t")
.load("/user/training_data.txt").toDF(newNames: _*)
Notice the _* annotation.
The _* is a type ascription in Scala: it tells the compiler to expand the sequence into individual arguments, so the whole collection can be passed to a varargs method such as toDF.
More here: What is the purpose of type ascriptions in Scala?
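A tiny standalone illustration of the varargs expansion (the names here are made up, not from the question):
def greet(names: String*): String = names.mkString("Hello ", ", ", "!")
val people = Seq("Alice", "Bob")
greet(people: _*)   // the Seq is expanded into varargs: "Hello Alice, Bob!"
// greet(people)    // would not compile: Seq[String] is not a String*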

Related

How to handle NullPointerException while reading, filtering and counting the lines of CSV files using SparkSession?

I'm trying to read the CSV files stored on HDFS using sparkSession and count the number of lines and print the value on the console. However, I'm constantly getting NullPointerException while calculating the count. Below is the code snippet,
val validEmployeeIds = Set("12345", "6789")
val count = sparkSession
.read
.option("escape", "\"")
.option("quote", "\"")
.csv(inputPath)
.filter(row => validEmployeeIds.contains(row.getString(0)))
.distinct()
.count()
println(count)
I'm getting the NPE exactly at the .filter condition. If I remove .filter from the code, it runs fine and prints the count. How can I handle this NPE?
The inputPath is a folder that contains multiple CSV files. Each CSV file has two columns: one is the Id and the other is the name of the employee. A sample CSV extract is below:
12345,Employee1
AA888,Employee2
I'm using Spark version 2.3.1.
Try using the isin function on the column instead of a Scala closure:
import spark.implicits._
val validEmployeeIds = List("12345", "6789")
val df = // Read CSV
df.filter('_c0.isin(validEmployeeIds:_*)).distinct().count()
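For context, a minimal end-to-end sketch of that approach (the input path is an assumption, not from the question):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CountValidEmployees").getOrCreate()
import spark.implicits._

val validEmployeeIds = List("12345", "6789")

val count = spark.read
  .option("escape", "\"")
  .option("quote", "\"")
  .csv("/path/to/employee/csvs")                 // hypothetical inputPath
  .filter($"_c0".isin(validEmployeeIds: _*))     // column predicate instead of a Scala closure
  .distinct()
  .count()

println(count)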

How to pass more than one column as a parameter to Spark dataframe

I want to pass more than one column name as a parameter to dataframe.
val readData = spark.sqlContext
.read.format("csv")
.option("delimiter",",")
.schema(Schema)
.load("emp.csv")
val cols_list1 = "emp_id,emp_dt"
val cols_list2 = "emp_num"
val RemoveDupli_DF = readData
.withColumn("rnk", row_number().over(Window.partitionBy(s"$cols_list1").orderBy(s"$cols_list2") ))
The above code works if I have one column name, but with two or more columns it gives the error below.
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'emp_id,emp_dt'
Using Scala 2.x version.
The partitionBy method has multiple signatures:
def partitionBy(colName: String, colNames: String*)
// or
def partitionBy(cols: Column*)
Your code is providing the list of columns as a single string which will fail because there is no column called emp_id,emp_dt. Hence, you get the error message.
You could define your column names (as Strings) in a collection
val cols_seq1 = Seq("emp_id","emp_dt")
and then call partitionBy with Column objects built from those names (col comes from org.apache.spark.sql.functions):
Window.partitionBy(cols_seq1.map(col): _*)
The : _* notation tells the compiler to pass each element of the sequence as its own argument to partitionBy, rather than passing the whole sequence as a single argument; mapping with col is what makes the elements match the Column* overload.
As an alternative you could also just use
Window.partitionBy("emp_id", "emp_dt")

Spark infer schema with limit during a read.csv

I'd like to infer a Spark.DataFrame schema from a directory of CSV files using a small subset of the rows (say limit(100)).
However, setting inferSchema to true means that the Input Size / Records for the FileScanRDD always seems to equal the total number of rows across all the CSV files.
Is there a way to make the FileScan more selective, such that Spark looks at fewer rows when inferring a schema?
Note: setting the samplingRatio option to be < 1.0 does not have the desired behaviour, though it is clear that inferSchema uses only the sampled subset of rows.
You could read a subset of your input data into a Dataset[String].
The csv method accepts a Dataset[String] as a parameter, so you can pass it in directly.
Here is a simple example (I'll leave reading the sample of rows from the input file to you):
val data = List("1,2,hello", "2,3,what's up?")
import spark.implicits._   // needed for .toDS on the RDD below
val csvRDD = sc.parallelize(data)
val df = spark.read.option("inferSchema", "true").csv(csvRDD.toDS)
df.schema
When run in spark-shell, the final line from the above prints (I reformatted it for readability):
res4: org.apache.spark.sql.types.StructType =
StructType(
StructField(_c0,IntegerType,true),
StructField(_c1,IntegerType,true),
StructField(_c2,StringType,true)
)
Which is the correct Schema for my limited input data set.
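If the goal is to then read the full data without a second inference pass, one hedged follow-up is to reuse the inferred schema for the real read (the path below is an assumption):
// Reuse the schema inferred from the small sample for the full read,
// so Spark does not scan every file again to infer types.
val inferredSchema = df.schema
val fullDf = spark.read
  .option("header", "false")
  .schema(inferredSchema)            // no inferSchema option needed now
  .csv("/path/to/csv/directory")     // hypothetical full input location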
Assuming you are only interested in the schema, here is a possible approach based on cipri.l's post in this link
import org.apache.spark.sql.{Dataset, SparkSession}
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.execution.datasources.csv.{CSVOptions, TextInputCSVDataSource}
def inferSchemaFromSample(sparkSession: SparkSession, fileLocation: String, sampleSize: Int, isFirstRowHeader: Boolean): StructType = {
// Build a Dataset composed of the first sampleSize lines from the input files as plain text strings
val dataSample: Array[String] = sparkSession.read.textFile(fileLocation).head(sampleSize)
import sparkSession.implicits._
val sampleDS: Dataset[String] = sparkSession.createDataset(dataSample)
// Provide information about the CSV files' structure
val firstLine = dataSample.head
val extraOptions = Map("inferSchema" -> "true", "header" -> isFirstRowHeader.toString)
val csvOptions: CSVOptions = new CSVOptions(extraOptions, sparkSession.sessionState.conf.sessionLocalTimeZone)
// Infer the CSV schema based on the sample data
val schema = TextInputCSVDataSource.inferFromDataset(sparkSession, sampleDS, Some(firstLine), csvOptions)
schema
}
Unlike GMc's answer above, this approach infers the schema directly, the same way DataFrameReader.csv() does under the hood, but without building an additional Dataset that we would only use to read the schema back from.
The schema is inferred based on a Dataset[String] containing only the first sampleSize lines from the input files as plain text strings.
When trying to retrieve samples from data, Spark has only 2 types of methods:
Methods that retrieve a given percentage of the data. This operation takes random samples from all partitions. It benefits from higher parallelism, but it must read all the input files.
Methods that retrieve a specific number of rows. This operation must collect the data on the driver, but it could read a single partition (if the required row count is low enough)
Since you mentioned you want to use a specific small number of rows and want to avoid touching all the data, I provided a solution based on option 2.
PS: The DataFrameReader.textFile method accepts paths to files and folders, and it also has a varargs variant, so you can pass in one or more files or folders.
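A hypothetical usage of the helper above (the path and sample size are assumptions, not from the question):
val schema = inferSchemaFromSample(spark, "/data/input/*.csv", sampleSize = 100, isFirstRowHeader = true)

// Read the full dataset with the pre-computed schema; no inference pass over all files.
val df = spark.read
  .option("header", "true")
  .schema(schema)
  .csv("/data/input/*.csv")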

Call inferSchema directly after the load is done with spark-csv

Is there a way that I can directly call inferSchema after load is done?
Ex:
val df = sqlContext.read.format("com.databricks.spark.csv").
option("header", "true").
option("inferSchema", "false").load(location)
df.schema
I want to call something like the below:
val newdf = df.inferSchema()
newdf.printSchema()
It's not possible unless you define a new schema and apply it when creating a new DataFrame.
You can also read the schema by loading the data with the csv source and inferSchema enabled, then store it for later use, but that will scan the data either way.
Since you haven't inferred a schema, spark-csv treats every column as a string.
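As a hedged sketch of the two usual workarounds (location and df are from the question; the column names are hypothetical):
// Option 1: re-read the same location with inference turned on (this scans the data again).
val inferredDf = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load(location)

// Option 2: keep the string-typed df and cast the columns you need explicitly.
import org.apache.spark.sql.functions.col
val typedDf = df
  .withColumn("amount", col("amount").cast("double"))   // hypothetical column
  .withColumn("created", col("created").cast("date"))   // hypothetical column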

How to import multiple csv files in a single load?

Consider I have a defined schema for loading 10 CSV files in a folder. Is there a way to automatically load the tables using Spark SQL? I know this can be done by using an individual dataframe for each file [given below], but can it be automated with a single command: rather than pointing to a file, can I point to a folder?
df = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true")
.load("../Downloads/2008.csv")
Use a wildcard, e.g. replace 2008 with *:
df = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true")
.load("../Downloads/*.csv") // <-- note the star (*)
Spark 2.0
// these lines are equivalent in Spark 2.0
spark.read.format("csv").option("header", "true").load("../Downloads/*.csv")
spark.read.option("header", "true").csv("../Downloads/*.csv")
Notes:
Replace format("com.databricks.spark.csv") with format("csv"), or use the csv method directly; the com.databricks.spark.csv format has been integrated into Spark 2.0.
Use spark instead of sqlContext.
Ex1:
Reading a single CSV file. Provide complete file path:
val df = spark.read.option("header", "true").csv("C:spark\\sample_data\\tmp\\cars1.csv")
Ex2:
Reading multiple CSV files passing names:
val df=spark.read.option("header","true").csv("C:spark\\sample_data\\tmp\\cars1.csv", "C:spark\\sample_data\\tmp\\cars2.csv")
Ex3:
Reading multiple CSV files passing list of names:
val paths = List("C:spark\\sample_data\\tmp\\cars1.csv", "C:spark\\sample_data\\tmp\\cars2.csv")
val df = spark.read.option("header", "true").csv(paths: _*)
Ex4:
Reading multiple CSV files in a folder ignoring other files:
val df = spark.read.option("header", "true").csv("C:spark\\sample_data\\tmp\\*.csv")
Ex5:
Reading multiple CSV files from multiple folders:
val folders = List("C:spark\\sample_data\\tmp", "C:spark\\sample_data\\tmp1")
val df = spark.read.option("header", "true").csv(folders: _*)
Note that you can use other tricks like:
-- One or more wildcards:
.../Downloads20*/*.csv
-- Braces and brackets:
.../Downloads201[1-5]/book.csv
.../Downloads201{11,15,19,99}/book.csv
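For example, a hedged sketch combining such patterns in a single read (the paths are assumptions in the style of the examples above):
val df = spark.read
  .option("header", "true")
  .csv("../Downloads20*/*.csv", "../Downloads201[1-5]/book.csv")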
Reader's Digest: (Spark 2.x)
For Example, if you have 3 directories holding csv files:
dir1, dir2, dir3
You then define paths as a comma-delimited string of paths, as follows:
paths = "dir1/,dir2/,dir3/*"
Then use the following function and pass it this paths variable:
def get_df_from_csv_paths(paths):
    df = spark.read.format("csv").option("header", "false").\
        schema(custom_schema).\
        option('delimiter', '\t').\
        option('mode', 'DROPMALFORMED').\
        load(paths.split(','))
    return df
By then running:
df = get_df_from_csv_paths(paths)
You will obtain in df a single spark dataframe containing the data from all the csvs found in these 3 directories.
===========================================================================
Full Version:
In case you want to ingest multiple CSVs from multiple directories you simply need to pass a list and use wildcards.
For Example:
if your data_path looks like this:
's3://bucket_name/subbucket_name/2016-09-*/184/*,
s3://bucket_name/subbucket_name/2016-10-*/184/*,
s3://bucket_name/subbucket_name/2016-11-*/184/*,
s3://bucket_name/subbucket_name/2016-12-*/184/*, ... '
you can use the above function to ingest all the csvs in all these directories and subdirectories at once:
This would ingest all directories in s3 bucket_name/subbucket_name/ according to the wildcard patterns specified. e.g. the first pattern would look in
bucket_name/subbucket_name/
for all directories with names starting with
2016-09-
and for each of those take only the directory named
184
and within that subdirectory look for all csv files.
And this would be executed for each of the patterns in the comma-delimited list.
This works much better than unioning separate dataframes.
Using Spark 2.0+, we can load multiple CSV files from different directories using
df = spark.read.csv(['directory_1','directory_2','directory_3'.....], header=True). For more information, refer to the documentation
here
val df = spark.read.option("header", "true").csv("C:spark\\sample_data\\*")
will consider the folders tmp, tmp1, tmp2, and so on under sample_data.
