Spark dataframe to csv file saving as part - apache-spark

Hi, I have code that saves a DataFrame as CSV locally, but I keep getting a directory named myfile.csv containing part-0000.gz, part-0001.gz, and so on. I just want a single CSV file. Here is my code:
String current = LocalDateTime.now().format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
groupedMessages.write()
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
    .save("/finance_reports/myfile.csv");
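Spark writes one part file per partition and treats the path given to save() as a directory, so a single flat CSV has to be produced in a separate step: call .coalesce(1) before write() so exactly one part file is created, then move that file out of the directory. A minimal java.nio sketch of the second step (the helper name and paths are illustrative assumptions, not from the original post):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SingleCsvOutput {
    // After dataFrame.coalesce(1).write()...save(dirPath) has produced a
    // directory with exactly one part file, move that file to `target`
    // and remove the directory (including the _SUCCESS marker).
    public static void promotePartFile(Path sparkOutputDir, Path target) throws IOException {
        List<Path> contents;
        try (Stream<Path> files = Files.list(sparkOutputDir)) {
            contents = files.collect(Collectors.toList());
        }
        Path part = contents.stream()
                .filter(p -> p.getFileName().toString().startsWith("part-"))
                .findFirst()
                .orElseThrow(() -> new IOException("no part file in " + sparkOutputDir));
        Files.move(part, target, StandardCopyOption.REPLACE_EXISTING);
        for (Path p : contents) {
            Files.deleteIfExists(p); // _SUCCESS, checksum files, etc.
        }
        Files.delete(sparkOutputDir);
    }
}
```

On HDFS rather than the local filesystem, the equivalent move would go through the Hadoop FileSystem API; older Hadoop versions also provided FileUtil.copyMerge for combining part files.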

Related

spark data read with quoted string

I have a CSV data file like the one below. Each record is terminated by a carriage return ('\r'), but certain text values are multi-line fields that use a line feed ('\n') as their internal delimiter. How can I use a Spark data source API option to handle this?
Spark 2.2.0 added support for parsing multi-line CSV files. You can use the following to read a CSV with multi-line fields:
val df = spark.read
  .option("sep", ",")
  .option("quote", "")
  .option("multiLine", "true")
  .option("inferSchema", "true")
  .csv(file_name)

How can you create a CSV file with headers from an empty Dataset&lt;Row&gt; in Spark 2.1?

Spark 2.1's default behaviour is to write completely empty files when saving an empty Dataset as CSV.
How can you get a CSV file that at least contains the header row?
This is what I am using to write the file:
dataFrame.repartition(NUM_PARTITIONS).write()
    .mode("overwrite") // "overwrite" is a save mode, not an option
    .option("header", "true")
    .option("delimiter", "\t")
    .option("nullValue", "null")
    .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
    .csv("some/path");
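Spark 2.1's CSV writer emits nothing at all for an empty Dataset, even with header set to true (later Spark releases improved this). One pragmatic workaround, sketched here as an assumption rather than taken from the thread, is to branch on the row count and write the header line yourself with plain Java I/O:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;

public class EmptyCsvWriter {
    // Hypothetical helper (not from the original thread): write a file
    // containing only the header row, for use when the Dataset is empty.
    public static void writeHeaderOnly(String[] columns, String delimiter, Path target)
            throws IOException {
        Files.write(target, Collections.singletonList(String.join(delimiter, columns)));
    }
}
```

In the job above you would check dataFrame.count() == 0 and, when true, call writeHeaderOnly(dataFrame.columns(), "\t", ...) instead of the Spark write.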

Spark Session read multiple files instead of using pattern

I'm trying to read a couple of CSV files from a folder on HDFS using SparkSession (i.e., I don't want to read all the files in the folder).
I get the following error while running (code at the end):
Path does not exist:
file:/home/cloudera/works/JavaKafkaSparkStream/input/input_2.csv,
/home/cloudera/works/JavaKafkaSparkStream/input/input_1.csv
I don't want to use a pattern such as /home/temp/*.csv, because in the future I will have logic to pick only one or two files out of the 100 CSV files in the folder.
Please advise.
SparkSession sparkSession = SparkSession
    .builder()
    .appName(SparkCSVProcessors.class.getName())
    .master(master).getOrCreate();
SparkContext context = sparkSession.sparkContext();
context.setLogLevel("ERROR");
Set<String> fileSet = Files.list(Paths.get("/home/cloudera/works/JavaKafkaSparkStream/input/"))
    .filter(name -> name.toString().endsWith(".csv"))
    .map(name -> name.toString())
    .collect(Collectors.toSet());
SQLContext sqlCtx = sparkSession.sqlContext();
Dataset<Row> rawDataset = sparkSession.read()
    .option("inferSchema", "true")
    .option("header", "true")
    .format("com.databricks.spark.csv")
    .option("delimiter", ",")
    //.load(String.join(" , ", fileSet));
    .load("/home/cloudera/works/JavaKafkaSparkStream/input/input_2.csv, " +
          "/home/cloudera/works/JavaKafkaSparkStream/input/input_1.csv");
UPDATE
I can iterate over the files and union them as below. Please recommend if there is a better way ...
Dataset<Row> unifiedDataset = null;
for (String fileName : fileSet) {
    Dataset<Row> tempDataset = sparkSession.read()
        .option("inferSchema", "true")
        .option("header", "true")
        .format("csv")
        .option("delimiter", ",")
        .load(fileName);
    if (unifiedDataset != null) {
        unifiedDataset = unifiedDataset.unionAll(tempDataset);
    } else {
        unifiedDataset = tempDataset;
    }
}
Your problem is that you are creating a single String with the value:
"/home/cloudera/works/JavaKafkaSparkStream/input/input_2.csv,
/home/cloudera/works/JavaKafkaSparkStream/input/input_1.csv"
instead of passing two filenames as separate parameters, which should be done like this:
.load("/home/cloudera/works/JavaKafkaSparkStream/input/input_2.csv",
      "/home/cloudera/works/JavaKafkaSparkStream/input/input_1.csv");
The comma has to be outside the string: you should pass two values rather than one String.
From my understanding, you want to read multiple files from HDFS without using a glob like "/path/*.csv". What you are missing is that each path needs to be a separate quoted string, with the paths separated by commas.
You can read them with the code below; make sure you have added the Spark CSV library:
sqlContext.read.format("csv").load("/home/cloudera/works/JavaKafkaSparkStream/input/input_1.csv", "/home/cloudera/works/JavaKafkaSparkStream/input/input_2.csv")
A pattern can be helpful as well. You want to select two files at a time; if they are sequential, you can do something like
.load("/home/cloudera/works/JavaKafkaSparkStream/input/input_[1-2].csv")
and for more files, input_[1-5].csv.
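Since DataFrameReader.load (and csv) accept varargs, the fileSet from the question can also be passed directly, avoiding the union loop entirely, e.g. sparkSession.read().format("csv").load(fileSet.toArray(new String[0])). A small stdlib sketch of collecting the paths (class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CsvPathCollector {
    // Collect the .csv paths in a directory as a String[] suitable for
    // DataFrameReader.load(String... paths) -- no glob pattern, and any
    // selection logic can be added to the filter below.
    public static String[] csvPaths(Path dir) throws IOException {
        try (Stream<Path> files = Files.list(dir)) {
            return files
                .filter(p -> p.toString().endsWith(".csv"))
                .map(Path::toString)
                .sorted()
                .toArray(String[]::new);
        }
    }
}
```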

Apache Spark Dataframe - Load data from nth line of a CSV file

I would like to process a huge (5 GB) order CSV file that has some metadata rows at the start of the file.
The header columns are on row 4 (starting with "h,"), followed by another metadata row describing optionality. Data rows start with "d,":
m,Version,v1.0
m,Type,xx
m,<OtherMetaData>,<...>
h,Col1,Col2,Col3,Col4,Col5,.............,Col100
m,Mandatory,Optional,Optional,...........,Mandatory
d,Val1,Val2,Val3,Val4,Val5,.............,Val100
Is it possible to skip a specified number of rows when loading the file, while still using the 'inferSchema' option for the Dataset?
Dataset<Row> df = spark.read()
    .format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("/home/user/data/20170326.csv");
Or do I need to define two different Datasets and use "except(Dataset other)" to exclude the Dataset with the rows to be ignored?
You can try setting the "comment" option to "m", which effectively tells the CSV reader to skip lines beginning with the "m" character:
Dataset<Row> df = spark.read()
    .format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .option("comment", "m")
    .load("/home/user/data/20170326.csv");
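If the metadata rows did not all share a single leading character, an alternative (a sketch under that assumption, not taken from the thread) is to pre-filter the raw lines before CSV parsing, keeping only the "h," header row and the "d," data rows; in Spark, the same predicate could be applied to spark.read().textFile(path) before parsing the remainder as CSV:

```java
import java.util.List;
import java.util.stream.Collectors;

public class DataRowFilter {
    // Keep only the header row ("h,...") and data rows ("d,..."),
    // dropping the "m,..." metadata rows described in the question.
    public static List<String> keepDataRows(List<String> lines) {
        return lines.stream()
            .filter(l -> l.startsWith("d,") || l.startsWith("h,"))
            .collect(Collectors.toList());
    }
}
```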

Reading csv file as data frame in spark

I am new to Spark, and I have a CSV file with over 1500 columns. I would like to load it as a DataFrame in Spark, but I am not sure how to do this.
Thanks
Use the spark-csv project: https://github.com/databricks/spark-csv (note that on Spark 2.0+ the CSV reader is built in, so the external package is only needed on 1.x).
Here is the example from its front page:
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")      // Use first line of all files as header
  .option("inferSchema", "true") // Automatically infer data types
  .load("cars.csv")
