Spark DataFrame ORC Hive table reading issue - apache-spark
I am trying to read a Hive table in Spark. Below is the Hive Table format:
# Storage Information
SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
InputFormat: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim \u0001
serialization.format \u0001
When I try to read it using Spark SQL with the command below:
val c = hiveContext.sql("""select
a
from c_db.c cs
where dt >= '2016-05-12' """)
c.show()
I am getting the warning below:
18/07/02 18:02:02 WARN ReaderImpl: Cannot find field for: a in _col0,
_col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18, _col19, _col20, _col21, _col22, _col23, _col24, _col25, _col26, _col27, _col28, _col29, _col30, _col31, _col32, _col33, _col34, _col35, _col36, _col37, _col38, _col39, _col40, _col41, _col42, _col43, _col44, _col45, _col46, _col47, _col48, _col49, _col50, _col51, _col52, _col53, _col54, _col55, _col56, _col57, _col58, _col59, _col60, _col61, _col62, _col63, _col64, _col65, _col66, _col67,
The read does start, but it is very slow and eventually fails with a network timeout.
When I try to read the Hive table directory directly, I get the error below.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.setConf("spark.sql.orc.filterPushdown", "true")
val c = hiveContext.read.format("orc").load("/a/warehouse/c_db.db/c")
c.select("a").show()
org.apache.spark.sql.AnalysisException: cannot resolve 'a' given input
columns: [_col18, _col3, _col8, _col66, _col45, _col42, _col31,
_col17, _col52, _col58, _col50, _col26, _col63, _col12, _col27, _col23, _col6, _col28, _col54, _col48, _col33, _col56, _col22, _col35, _col44, _col67, _col15, _col32, _col9, _col11, _col41, _col20, _col2, _col25, _col24, _col64, _col40, _col34, _col61, _col49, _col14, _col13, _col19, _col43, _col65, _col29, _col10, _col7, _col21, _col39, _col46, _col4, _col5, _col62, _col0, _col30, _col47, trans_dt, _col57, _col16, _col36, _col38, _col59, _col1, _col37, _col55, _col51, _col60, _col53];
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
I can convert the Hive table to TextInputFormat, but that should be my last option, as I would like to keep the benefit of OrcInputFormat's compression to reduce the table size.
Any suggestions are really appreciated.
I found a workaround by reading the table this way:
val schema = spark.table("db.name").schema
spark.read.schema(schema).orc("/path/to/table")
The issue generally occurs with large tables, as the read fails at the maximum field length. I enabled the metastore ORC conversion (set spark.sql.hive.convertMetastoreOrc=true;) and it worked for me.
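For reference, here is a minimal sketch (not from the original thread; the database, table, and column names are taken from the question above) of applying that setting before querying. As I understand it, with this flag on, Spark reads the ORC files with its native reader and the metastore schema, so the physical _colN names are mapped back to the real column names.
// Enable conversion of metastore ORC tables to Spark's native ORC data source.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.setConf("spark.sql.hive.convertMetastoreOrc", "true")
val c = hiveContext.sql("select a from c_db.c where dt >= '2016-05-12'")
c.show()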
I think the table doesn't have named columns, or if it does, Spark probably isn't able to read the names.
You can use the default column names that Spark has assigned, as shown in the error, or set the column names in the Spark code yourself.
Use printSchema and the toDF method to rename the columns. But yes, you will need the mapping of positions to names, and this might require selecting and showing columns individually.
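As a rough sketch of that renaming approach (the column names below are placeholders, not the real mapping; you would need to take them, in order, from the Hive table DDL):
val raw = hiveContext.read.format("orc").load("/a/warehouse/c_db.db/c")
raw.printSchema()  // prints _col0, _col1, ... plus any partition columns such as trans_dt
// Placeholder names; toDF requires exactly one name per column, in positional order.
val columnNames = Seq("a", "b", "trans_dt")
val named = raw.toDF(columnNames: _*)
named.select("a").show()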
Setting the (set spark.sql.hive.convertMetastoreOrc=true;) conf works, but it seems to touch the metadata of the Hive table. Can you please explain what it modifies and whether it affects the table?
Thanks
Related
Delta Live Tables and ingesting AVRO
So, I'm trying to load Avro files into DLT, create pipelines, and so forth. As a simple DataFrame in Databricks, I can read and unpack the Avro files using read.json / rdd.map / lambda functions, create a temp view, run a SQL query, and then select the fields I want.
--example command
in_path = '/mnt/file_location/*/*/*/*/*.avro'
avroDf = spark.read.format("com.databricks.spark.avro").load(in_path)
jsonRdd = avroDf.select(avroDf.Body.cast("string")).rdd.map(lambda x: x[0])
data = spark.read.json(jsonRdd)
data.createOrReplaceTempView("eventhub")
--selecting the data
sql_query1 = sqlContext.sql("""
select distinct
data.field.test1 as col1
,data.field.test2 as col2
,data.field.fieldgrp.city as city
from eventhub
""")
However, I am now trying to replicate the process using Delta Live Tables and pipelines. I have used Auto Loader to load the files into a table and kept the format as is, so bronze is just Avro in its rawest form. I then planned to create a view that exposes the unpacked Avro data, much like I did above with "eventhub", which would then allow me to write queries against it.
The trouble is, I can't get it to work in DLT. I fail at the second step, after I have imported the files into the bronze layer. It just does not seem to apply the functions that make the data readable/selectable. This is the sort of code I have been trying; however, it does not seem to pick up the schema, so it is as if the functions are not working, and when I try to select a column, it does not recognise it.
--unpacked data
@dlt.view(name=f"eventdata_v")
def eventdata_v():
    avroDf = spark.read.format("delta").table("live.bronze_file_list")
    jsonRdd = avroDf.select(avroDf.Body.cast("string")).rdd.map(lambda x: x[0])
    data = spark.read.json(jsonRdd)
    return data
--trying to query the data, but it does not recognise field names, even when I select "data" only
@dlt.view(name=f"eventdata2_v")
def eventdata2_v():
    df = (
        dlt.read("eventdata_v")
        .select("data.field.test1")
    )
    return df
I have been working on this for weeks, trying different approaches, but still no luck. Any help will be so appreciated. Thank you.
Merging duplicate columns in seq json hdfs files in spark
I am reading a seq json file from HDFS using Spark like this:
val data = spark.read.json(spark.sparkContext.sequenceFile[String, String]("/prod/data/class1/20190114/2019011413/class2/part-*").map{ case (x, y) => y.toString })
data.registerTempTable("data")
val filteredData = data.filter("sourceInfo='Web'")
val explodedData = filteredData.withColumn("A", explode(filteredData("payload.adCsm.vfrd")))
val explodedDataDbg = explodedData.withColumn("B", explode(filteredData("payload.adCsm.dbg"))).drop("payload")
On which I am getting this error:
org.apache.spark.sql.AnalysisException: Ambiguous reference to fields StructField(adCsm,ArrayType(StructType(StructField(atfComp,StringType,true), StructField(csmTot,StringType,true), StructField(dbc,ArrayType(LongType,true),true), StructField(dbcx,LongType,true), StructField(dbg,StringType,true), StructField(dbv,LongType,true), StructField(fv,LongType,true), StructField(hdr,LongType,true), StructField(hidden,StructType(StructField(duration,LongType,true), StructField(stime,StringType,true)),true), StructField(hvrx,DoubleType,true), StructField(hvry,DoubleType,true), StructField(inf,StringType,true), StructField(isP,LongType,true), StructField(ltav,StringType,true), StructField(ltdb,StringType,true), StructField(ltdm,StringType,true), StructField(lteu,StringType,true), StructField(ltfm,StringType,true), StructField(ltfs,StringType,true), StructField(lths,StringType,true), StructField(ltpm,StringType,true), StructField(ltpq,StringType,true), StructField(ltts,StringType,true), StructField(ltut,StringType,true), StructField(ltvd,StringType,true), StructField(ltvv,StringType,true), StructField(msg,StringType,true), StructField(nl,LongType,true), StructField(prerender,StructType(StructField(duration,LongType,true), StructField(stime,LongType,true)),true), StructField(pt,StringType,true), StructField(src,StringType,true), StructField(states,StringType,true), StructField(tdr,StringType,true), StructField(tld,StringType,true), StructField(trusted,BooleanType,true), StructField(tsc,LongType,true), StructField(tsd,DoubleType,true), StructField(tsz,DoubleType,true), StructField(type,StringType,true), StructField(unloaded,StructType(StructField(duration,LongType,true), StructField(stime,LongType,true)),true), StructField(vdr,StringType,true), StructField(vfrd,LongType,true), StructField(visible,StructType(StructField(duration,LongType,true), StructField(stime,StringType,true)),true), StructField(xpath,StringType,true)),true),true), StructField(adcsm,ArrayType(StructType(StructField(tdr,DoubleType,true), StructField(vdr,DoubleType,true)),true),true);
Not sure how, but ONLY SOMETIMES there are two structs with the same name "adCsm" inside "payload". Since I am interested in fields present in only one of them, I need to deal with this ambiguity. I know one way is to check for the fields A and B and drop the column if the fields are absent, and hence choose the other adCsm. I was wondering if there is a better way to handle this? Could I just merge the duplicate columns (with different data) instead of this explicit filtering? Not sure how duplicate structs are even present in a seq "json" file. TIA!
I think the ambiguity happened due to a case sensitivity issue with the Spark DataFrame column names. In the last part of the schema I see
StructField(adcsm,ArrayType(StructType(StructField(tdr,DoubleType,true), StructField(vdr,DoubleType,true)),true),true)
so there are two StructFields whose names differ only by case (adCsm and adcsm) inside the plain StructType. First enable case sensitivity in Spark SQL with sqlContext.sql("set spark.sql.caseSensitive=true"), and it will then differentiate the two fields. Here are details on how to solve the case sensitivity issue: solve case sensitivity issue. Hopefully it helps you.
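A minimal sketch of that fix, reusing data and the column references from the question above (the import is only needed if explode is not already in scope):
import org.apache.spark.sql.functions.explode
// With case-sensitive resolution, adCsm and adcsm are distinct fields rather than an ambiguous reference.
sqlContext.sql("set spark.sql.caseSensitive=true")
val filteredData = data.filter("sourceInfo='Web'")
val explodedData = filteredData.withColumn("A", explode(filteredData("payload.adCsm.vfrd")))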
Spark: Read multiple AVRO files with different schema in parallel
I have many (relatively small) Avro files with different schemas, each in its own location like this:
Object Name: A
/path/to/A
A_1.avro
A_2.avro
...
A_N.avro
Object Name: B
/path/to/B
B_1.avro
B_2.avro
...
B_N.avro
Object Name: C
/path/to/C
C_1.avro
C_2.avro
...
C_N.avro
...
My goal is to read them in parallel via Spark and store each row as a blob in one column of the output. As a result, my output data will have a consistent schema, something like the following columns:
ID, objectName, RecordDate, Data
where the 'Data' field contains a JSON string of the original record. My initial thought was to put the Spark read statements in a loop, create the fields shown above for each dataframe, and then apply a union operation to get my final dataframe, like this:
all_df = []
for obj_name in all_object_names:
    file_path = get_file_path(obj_name)
    df = spark.read.format(DATABRIKS_FORMAT).load(file_path)
    all_df.append(df)
df_o = all_df[0]
for df in all_df[1:]:
    df_o = df_o.union(df)
# write df_o to the output
However, I'm not sure if the read operations are going to be parallelized. I also came across the sc.textFile() function to read all the AVRO files in one shot as strings, but couldn't make it work. So I have 2 questions:
Would the multiple read statements in a loop be parallelized by Spark? Or is there a more efficient way to achieve this?
Can sc.textFile() be used to read the AVRO files as a string JSON in one column?
I'd appreciate your thoughts and suggestions.
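For what it's worth, here is a hedged Scala sketch of the loop-and-union idea described above. The paths, object names, and the Avro format string are placeholders (on older Spark versions the format would be "com.databricks.spark.avro"). As far as I understand, the reads in the loop only build lazy plans on the driver; the actual file scanning is distributed across the cluster once an action runs.
import org.apache.spark.sql.functions.{col, lit, struct, to_json}

// Placeholder object-name -> location mapping.
val objectPaths = Map("A" -> "/path/to/A", "B" -> "/path/to/B", "C" -> "/path/to/C")

// Read each object's files and reduce every row to a common (objectName, Data) shape,
// where Data is the original record serialized as a JSON string.
val perObject = objectPaths.map { case (name, path) =>
  val df = spark.read.format("avro").load(path)
  df.select(lit(name).as("objectName"), to_json(struct(df.columns.map(col): _*)).as("Data"))
}

// Union the per-object frames into one DataFrame with a consistent schema.
val combined = perObject.reduce(_ union _)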
HIVE Parquet error
I'm trying to insert the content of a DataFrame into a partitioned, Parquet-formatted Hive table using df.write.mode(SaveMode.Append).insertInto(myTable) with hive.exec.dynamic.partition = 'true' and hive.exec.dynamic.partition.mode = 'nonstrict'. I keep getting a parquet.io.ParquetEncodingException saying that empty fields are illegal and that the field should be omitted completely instead. The schema includes arrays (array<struct<int,string>>), and the DataFrame does contain some empty entries for these fields. However, when I insert the DataFrame content into a non-partitioned table, I do not get an error. How do I fix this issue? I have attached the error.
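Not an answer from the thread, but a hedged sketch of one direction the error message itself points at: since Parquet rejects empty (as opposed to null) collections, empty arrays could be nulled out before the insert. Here df, myTable, and arr_field are placeholders for the DataFrame, table, and array column from the question.
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.{col, lit, size, when}

// Replace empty arrays with null so the writer can omit the field entirely.
val cleaned = df.withColumn("arr_field",
  when(size(col("arr_field")) === 0, lit(null)).otherwise(col("arr_field")))

cleaned.write.mode(SaveMode.Append).insertInto("myTable")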
spark save taking lot of time
I've 2 dataframes, and I want to find the records where all columns are equal except 2 (surrogate_key, current), and then save those records with a new surrogate_key value. Following is my code:
val seq = csvDataFrame.columns.toSeq
var exceptDF = csvDataFrame.except(csvDataFrame.as('a).join(table.as('b), seq).drop("surrogate_key", "current"))
exceptDF.show()
exceptDF = exceptDF.withColumn("surrogate_key", makeSurrogate(csvDataFrame("name"), lit("ecc")))
exceptDF = exceptDF.withColumn("current", lit("Y"))
exceptDF.show()
exceptDF.write.option("driver", "org.postgresql.Driver").mode(SaveMode.Append).jdbc(postgreSQLProp.getProperty("url"), tableName, postgreSQLProp)
This code gives correct results, but gets stuck while writing those results to Postgres. Not sure what the issue is. Also, is there any better approach for this?
Regards,
Sorabh
By default, Spark SQL uses 200 shuffle partitions, which means that when you try to save the dataframe it will be written as 200 parquet files. You can reduce the number of partitions for a DataFrame using the techniques below.
At the application level, set the parameter "spark.sql.shuffle.partitions" as follows:
sqlContext.setConf("spark.sql.shuffle.partitions", "10")
Or reduce the number of partitions for a particular DataFrame as follows:
df.coalesce(10).write.save(...)
Using var for DataFrames is not recommended; you should always use val and create a new DataFrame after performing a transformation. Please remove all the vars and replace them with vals. Hope this helps!
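As a small illustration of that val-only style, using the names from the question above (makeSurrogate and the Postgres properties are assumed to be defined as in the original code):
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.lit

// Each step yields a new immutable DataFrame; nothing is reassigned.
val base = csvDataFrame.except(csvDataFrame.as("a").join(table.as("b"), seq).drop("surrogate_key", "current"))
val withKey = base.withColumn("surrogate_key", makeSurrogate(csvDataFrame("name"), lit("ecc")))
val exceptDF = withKey.withColumn("current", lit("Y"))

exceptDF.write
  .option("driver", "org.postgresql.Driver")
  .mode(SaveMode.Append)
  .jdbc(postgreSQLProp.getProperty("url"), tableName, postgreSQLProp)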