My issue is that my CSV files come zipped, in .csv.zip format, so I cannot just use Spark's .csv reader directly as I could with .csv.gzip / .csv.gz. This means I need to decompress each file, read the contents (the files are quite big, ~5 GB) and write them out as parquet files.
My approach is as follows:
String paths = "s3a://...,s3a://...,...";
JavaRDD<Tuple2<String, PortableDataStream>> zipRDD = context.binaryFiles(paths, sparkContext.context.defaultParallelism()).toJavaRDD();
JavaRDD<Tuple2<String, List<Row>>> filenameRowsRDD = zipRDD.flatMap(new ConvertLinesToRows());
The first JavaRDD returns pairs of (filename, PortableDataStream). These are passed to the ConvertLinesToRows class, which wraps the stream in a ZipInputStream, reads the contents of the CSV file, creates a new Spark Row for each line, and finally returns a tuple of (filename, List<Row>) where the list contains all the CSV lines converted to Rows.
I now want to save each read CSV as parquet file.
filenameRowsRDD.foreach(tuple -> {
SparkContext newContext = MySparkConfig.createNewContext();
SparkSession newSpark = SparkSession.builder()
.sparkContext(newContext)
.getOrCreate();
Dataset<Row> dataset = newSpark.createDataFrame(tuple._2, datasetSchema.value());
dataset.write().parquet("s3a://...");
});
I recreate the SparkSession in my executor so that I can use SparkSession.write.
The idea is that this will all run on an executor (I'm hoping). However, with this approach reading the file is not an issue; the problem comes when the executor wants to write the output file, throwing an exception: A master URL must be set in your configuration.
It seems like I'm doing something in an anti-Spark way, and it does not work. I have also tried broadcasting my SparkSession, however that throws an NPE inside SparkSession before it even tries to write.
What would be the correct way to approach my problem here? What would be the Spark way of doing this?
All of the above code is in my main() method. Am I correct in assuming that the first zipRDD runs on the master node, and then the second filenameRowsRDD as well as the .foreach run on the executor nodes?
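For reference, here is a minimal Scala sketch (not the asker's code) of the driver-side variant of this pipeline: the executors only decompress and parse inside the flatMap closure, while the single SparkSession on the driver issues the write, so no session is ever created inside a task. It assumes one CSV entry per zip, an illustrative two-column schema, and illustrative s3a paths.
import java.util.zip.ZipInputStream
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import scala.io.Source

val spark = SparkSession.builder().getOrCreate()

// Illustrative schema: replace with the real CSV schema (field count must match the split).
val schema = StructType(Seq(StructField("col1", StringType), StructField("col2", StringType)))

val zipPaths = Seq("s3a://bucket/in/a.csv.zip", "s3a://bucket/in/b.csv.zip")  // illustrative paths

zipPaths.foreach { zipPath =>
  // Executors decompress and parse; nothing Spark-specific runs inside this closure.
  val rows = spark.sparkContext.binaryFiles(zipPath).flatMap { case (_, stream) =>
    val zis = new ZipInputStream(stream.open())
    zis.getNextEntry()                                    // position on the single CSV entry inside the zip
    Source.fromInputStream(zis).getLines()
      .map(line => Row(line.split(","): _*))              // naive split; a proper CSV parser is safer
  }

  // The write is triggered from the driver's session but still runs distributed.
  val outPath = zipPath.replaceAll("\\.csv\\.zip$", ".parquet").replace("/in/", "/out/")
  spark.createDataFrame(rows, schema).write.parquet(outPath)
}
As for the last question: the transformations on both RDDs, and the foreach closure, all execute on the executors; only the driver (where main() runs) builds the job and holds the SparkSession, which is why creating or using a session inside foreach fails.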
I am using Spark Streaming and I want to save each batch locally in Avro format. I have used saveAsNewAPIHadoopFile to save the data in Avro format, and this works well, but it overwrites the existing file: the next batch's data overwrites the old data. Is there any way to save the Avro files into a common directory? I tried adding some Hadoop job conf properties to add a prefix to the file name, but none of the properties worked.
dstream.foreachRDD { rdd =>
  rdd.saveAsNewAPIHadoopFile(
    path,
    classOf[AvroKey[T]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[T]],
    job.getConfiguration()
  )
}
Try this: you can split your process into 2 steps:
Step-01: Write the Avro file using saveAsNewAPIHadoopFile to <temp-path>
Step-02: Move the file from <temp-path> to <actual-target-path>
This will definitely solve your problem for now. I will share my thoughts if I manage to handle this scenario in one step instead of two.
Hope this is helpful.
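A hedged Scala sketch of Step-02 only (names and paths are illustrative, not from the original answer): after saveAsNewAPIHadoopFile has written a batch to a per-batch temp directory, move its part files into the common target directory with the batch time as a prefix, so earlier batches are never overwritten.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Move a finished batch from its temp directory into the shared target directory.
def promoteBatch(conf: Configuration, tempDir: String, targetDir: String, batchMillis: Long): Unit = {
  val fs = FileSystem.get(conf)
  fs.listStatus(new Path(tempDir))
    .filter(_.getPath.getName.startsWith("part-"))
    .foreach { status =>
      // Prefix with the batch time so successive batches never collide.
      fs.rename(status.getPath, new Path(targetDir, s"$batchMillis-${status.getPath.getName}"))
    }
  fs.delete(new Path(tempDir), true)  // drop the now-empty temp directory
}
It could be called from inside dstream.foreachRDD { (rdd, batchTime) => ... } right after the saveAsNewAPIHadoopFile call, e.g. promoteBatch(rdd.sparkContext.hadoopConfiguration, tempPath, targetPath, batchTime.milliseconds).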
response = "mi_or_chd_5"
outcome = sqlc.sql("""select eid,{response} as response
from outcomes
where {response} IS NOT NULL""".format(response=response))
outcome.write.parquet(response, mode="overwrite") # Success
print outcome.schema
StructType(List(StructField(eid,IntegerType,true),StructField(response,ShortType,true)))
But then:
outcome2 = sqlc.read.parquet(response) # fail
fails with:
AnalysisException: u'Unable to infer schema for Parquet. It must be specified manually.;'
in /usr/local/lib/python2.7/dist-packages/pyspark-2.1.0+hadoop2.7-py2.7.egg/pyspark/sql/utils.pyc in deco(*a, **kw)
The documentation for parquet says the format is self-describing, and the full schema was available when the parquet file was saved. What gives?
Using Spark 2.1.1. Also fails in 2.2.0.
Found this bug report, but it was fixed in 2.0.1, 2.1.0.
UPDATE: This works when connected with master="local", and fails when connected to master="mysparkcluster".
This error usually occurs when you try to read an empty directory as parquet.
Your outcome DataFrame is probably empty.
You could check if the DataFrame is empty with outcome.rdd.isEmpty() before writing it.
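For example, a minimal guard might look like the following (shown in Scala; the same outcome.rdd.isEmpty() check works from PySpark). The function name and the overwrite mode are illustrative choices, not from the answer above.
import org.apache.spark.sql.DataFrame

// Only write when the DataFrame actually has rows, so an empty directory is
// never produced and never read back later.
def writeIfNonEmpty(df: DataFrame, path: String): Unit = {
  if (!df.rdd.isEmpty()) df.write.mode("overwrite").parquet(path)
  else println(s"Skipping write: no rows for $path")
}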
In my case, the error occurred because I was trying to read a parquet file which started with an underscore (e.g. _lots_of_data.parquet). Not sure why this was an issue, but removing the leading underscore solved the problem.
See also:
Re: Spark-2.0.0 fails reading a parquet dataset generated by Spark-1.6.2
I'm using AWS Glue and I received this error while reading data from a data catalog table (location: S3 bucket).
After a bit of analysis I realised that this was due to a file not being available at the file location (in my case, the S3 bucket path).
Glue was trying to apply the data catalog table schema to a file which doesn't exist.
After copying a file into the S3 bucket location, the issue was resolved.
Hope this helps someone who encounters this error in AWS Glue.
This case occurs when you try to read a table that is empty. If data had been correctly inserted into the table, there would be no problem.
Besides parquet, the same thing happens with ORC.
I see there are already many answers, but the issue I faced was that my Spark job was trying to read a file which was being overwritten by another Spark job that had been started earlier. It sounds bad, but I made that mistake.
Just to emphasize @Davos's answer in a comment: you will encounter this exact exception if your file name has a dot . or an underscore _ at the start of the filename:
val df = spark.read.format("csv").option("delimiter", "|").option("header", "false")
.load("/Users/myuser/_HEADER_0")
org.apache.spark.sql.AnalysisException:
Unable to infer schema for CSV. It must be specified manually.;
The solution is to rename the file and try again (e.g. rename _HEADER to HEADER).
This happened to me for a parquet file that was in the process of being written.
You just need to wait for it to be completely written.
In my case, the error occurred because the filename contained underscores. Rewriting / reading the file without underscores (hyphens were OK) solved the problem...
For me this happened when I thought I was loading the correct file path but was instead pointing to an incorrect folder.
I ran into a similar problem when reading a CSV:
spark.read.csv("s3a://bucket/spark/csv_dir/.")
gave an error of:
org.apache.spark.sql.AnalysisException: Unable to infer schema for CSV. It must be specified manually.;
I found that if I removed the trailing . then it works, i.e.:
spark.read.csv("s3a://bucket/spark/csv_dir/")
I tested this for parquet by adding a trailing . and you get an error of:
org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
I just encountered the same problem, but none of the solutions here work for me. I tried to merge the row groups of my parquet files on HDFS by first reading them and then writing them to another place using:
df = spark.read.parquet('somewhere')
df.write.parquet('somewhere else')
But later when I query it with
spark.sql('SELECT sth FROM parquet.`hdfs://host:port/parquetfolder/` WHERE .. ')
it shows the same problem.
I finally solved this by using pyarrow:
import pyarrow as pa
import pyarrow.parquet as pq

df = spark.read.parquet('somewhere')
pdf = df.toPandas()              # collect to the driver as a pandas DataFrame
adf = pa.Table.from_pandas(pdf)  # convert to an Arrow table
fs = pa.hdfs.connect()           # connect to HDFS via pyarrow
fw = fs.open(path, 'wb')         # 'path' is the target file on HDFS
pq.write_table(adf, fw)
fw.close()
Check whether .parquet files are available at the response path. I am assuming that either the files don't exist or they exist in some internal (partitioned) folders. If the files are under multiple levels of folders, then append /* for each level.
In my case the .parquet files were under 3 folders from base_path, so I gave the path as base_path/*/*/*.
As others mentioned, in my case this error appeared when I was reading S3 keys that did not exist.
A solution is to filter in only the keys that do exist:
import com.amazonaws.services.s3.AmazonS3URI
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SparkSession
import java.net.URI
def addEndpointToUrl(url: String, domain: String = "s3.amazonaws.com"): String = {
val uri = new URI(url)
val hostWithEndpoint = uri.getHost + "." + domain
new URI(uri.getScheme, uri.getUserInfo, hostWithEndpoint, uri.getPort, uri.getPath, uri.getQuery, uri.getFragment).toString
}
def createS3URI(url: String): AmazonS3URI = {
try {
// try to instantiate AmazonS3URI with url
new AmazonS3URI(url)
} catch {
case e: IllegalArgumentException if e.getMessage.
startsWith("Invalid S3 URI: hostname does not appear to be a valid S3 endpoint") => {
new AmazonS3URI(addEndpointToUrl(url))
}
}
}
def s3FileExists(spark: SparkSession, url: String): Boolean = {
val amazonS3Uri: AmazonS3URI = createS3URI(url)
val s3BucketUri = new URI(s"${amazonS3Uri.getURI().getScheme}://${amazonS3Uri.getBucket}")
FileSystem
.get(s3BucketUri, spark.sparkContext.hadoopConfiguration)
.exists(new Path(url))
}
and you can use it as:
val partitions = List(yesterday, today, tomorrow)
.map(f => somepath + "/date=" + f)
.filter(f => s3FileExists(spark, f))
val df = spark.read.parquet(partitions: _*)
For that solution I took some code from the spark-redshift project, here.
You are just loading a parquet file, and of course parquet has a valid schema, otherwise it would not have been saved as parquet. This error means either:
The parquet file does not exist (in 99.99% of cases this is the issue; Spark error messages are often less obvious), or
Somehow the parquet file got corrupted, or it's not a parquet file at all.
I ran into this issue because of a folder-within-a-folder problem.
For example, folderA.parquet was supposed to contain the partitions, but instead it contained folderB.parquet, which in turn contained the partitions.
Resolution: move the files to the parent folder and delete the subfolder.
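A hedged sketch of that resolution using the Hadoop FileSystem API, with the folder names from this answer and an assumed spark session in scope:
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val parent = new Path("/data/folderA.parquet")                  // illustrative location
val nested = new Path("/data/folderA.parquet/folderB.parquet")

// Move every file of the nested folder up into the parent, then drop the subfolder.
fs.listStatus(nested).foreach { status =>
  fs.rename(status.getPath, new Path(parent, status.getPath.getName))
}
fs.delete(nested, true)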
You can also read with /*:
outcome2 = sqlc.read.parquet(f"{response}/*")  # worked for me
My problem was that I moved the files to another location without changing the paths recorded in the files in the _spark_metadata folder.
To fix it, do something like:
sed -i "s|/old/path|/new/path|g" ** .*
It seems this issue can be caused by many things; I am providing another scenario:
By default the Spark parquet source uses "partition inferring", which means it expects the file paths to be partitioned as key=value pairs and the load to happen at the root.
To avoid this, if we ensure all the leaf files have an identical schema, then we can use
df = spark.read.format("parquet")\
    .option("recursiveFileLookup", "true")\
    .load(path)  # 'path' is a placeholder for the parquet root directory
to disable "partition inferring" manually. It basically loads the files one by one using parquet's logical_type.
I'm trying to save a DataFrame to CSV using the new Spark 2.1 csv option:
df.select(myColumns: _*).write
.mode(SaveMode.Overwrite)
.option("header", "true")
.option("codec", "org.apache.hadoop.io.compress.GzipCodec")
.csv(absolutePath)
Everything works fine and I don't mind having the part-000XX prefix,
but now it seems some UUID was added as a suffix,
i.e.
part-00032-10309cf5-a373-4233-8b28-9e10ed279d2b.csv.gz ==> part-00032.csv.gz
Does anyone know how I can remove this UUID suffix and keep only the part-000XX naming convention?
Thanks
You can remove the UUID by overriding the configuration option "spark.sql.sources.writeJobUUID":
https://github.com/apache/spark/commit/0818fdec3733ec5c0a9caa48a9c0f2cd25f84d13#diff-c69b9e667e93b7e4693812cc72abb65fR75
Unfortunately this solution will not fully mirror the old saveAsTextFile style (i.e. part-00000), but it could make the output file name more sane, such as part-00000-output.csv.gz, where "output" is the value you pass to spark.sql.sources.writeJobUUID. The "-" is appended automatically.
SPARK-8406 is the relevant Spark issue and here's the actual Pull Request: https://github.com/apache/spark/pull/6864
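If overriding the configuration option does not give you exactly the part-000XX names, another workaround (not part of the answer above, and purely a sketch) is to rename the part files right after the write using the Hadoop FileSystem API; spark and absolutePath are assumed to be the session and the output directory from the question.
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.listStatus(new Path(absolutePath))
  .filter(_.getPath.getName.startsWith("part-"))
  .foreach { status =>
    val name = status.getPath.getName                                  // e.g. part-00032-<uuid>.csv.gz
    val cleaned = name.replaceAll("^(part-\\d+).*(\\.csv\\.gz)$", "$1$2")
    if (cleaned != name) fs.rename(status.getPath, new Path(status.getPath.getParent, cleaned))
  }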
I am trying to solve a simple log parsing problem: given N log files
logfile.20160601.txt
logfile.29169692.txt
...
I would like to save a parquet file with the date key of the log as a partition column.
To accomplish that, I found this way to get the input split path:
val data = sc.textFile("/logdirectory/*.*")
val logsWithFileName = data.mapPartitionsWithInputSplit { (inputSplit, iterator) =>
val file = inputSplit.asInstanceOf[FileSplit]
val logDateKey = getDatekeyFromPath(file)
iterator.map { tpl => ( logDateKey, tpl._2.toString) }
}
val logs = logsWithFileName.map(item => LogItem(item._1,item._2))
val df = logs.toDF
Now I try to save the dataframe
df.write.partitionBy("logdatekey", "hostname").save("C:/temp/results.parquet")
but I receive this message
Output directory file:/C:/temp/results.parquet already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/C:/temp/results.parquet already exists
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1179)
Has anyone experienced this strange behavior? Could this be related to the use of the input split?
Many thanks in advance,
Rob
Well, your error message says it all: you are trying to write to an output that already exists.
You just need to check what save operations are available:
Save operations can optionally take a SaveMode, which specifies how to handle existing data if present. It is important to realize that these save modes do not utilize any locking and are not atomic. Additionally, when performing an Overwrite, the data will be deleted before writing out the new data.
SaveMode.ErrorIfExists (the default behavior) - when saving a DataFrame to a data source, if data already exists, an exception is expected to be thrown.
SaveMode.Append - when saving a DataFrame to a data source, if data/table already exists, contents of the DataFrame are expected to be appended to existing data.
SaveMode.Overwrite - means that when saving a DataFrame to a data source, if data/table already exists, existing data is expected to be overwritten by the contents of the DataFrame.
SaveMode.Ignore - means that when saving a DataFrame to a data source, if data already exists, the save operation is expected to not save the contents of the DataFrame and to not change the existing data. This is similar to a CREATE TABLE IF NOT EXISTS in SQL.
So in your case, if you want to overwrite your existing data, you should do the following:
import org.apache.spark.sql.SaveMode
df.write.partitionBy("logdatekey", "hostname").mode(SaveMode.Overwrite).save("C:/temp/results.parquet")