Spark SQL returns null for a column in HIVE table while HIVE query returns non null values - apache-spark

I have a Hive table created on top of S3 data in Parquet format, partitioned by one column named eventdate.
1) When using a Hive query, it returns data for the column named "headertime", which is in the schema of BOTH the table and the file:
select headertime from dbName.test_bug where eventdate=20180510 limit 10
2) From a Scala notebook, directly loading a file from a particular partition also works:
val session = org.apache.spark.sql.SparkSession.builder
  .appName("searchRequests")
  .enableHiveSupport()
  .getOrCreate()

val searchRequest = session.sqlContext.read.parquet("s3n://bucketName/module/search_request/eventDate=20180510")
searchRequest.createOrReplaceTempView("SearchRequest")

val exploreDF = session.sql("select headertime from SearchRequest where SearchRequestHeaderDate='2018-05-10' limit 100")
exploreDF.show(20)
This also displays the values for the column "headertime".
3) But when using Spark SQL to query the Hive table directly, as below,
val exploreDF = session.sql("select headertime from tier3_vsreenivasan.test_bug where eventdate=20180510 limit 100")
exploreDF.show(20)
it always returns null.
I opened the Parquet file and can see that the column headertime is present with values, but I am not sure why Spark SQL is not able to read the values for that column.
It would be helpful if someone could point out where Spark SQL gets the schema from. I was expecting it to behave similarly to the Hive query.
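One thing worth checking (an assumption on my part, not something stated in the question): whether the metastore schema and the Parquet footer schema disagree, for example on the casing of the column name, since the table query and the direct file load take their schemas from different places. A minimal sketch for comparing the two, reusing the session and path from above:
// Schema used when querying the table: the Hive metastore definition.
session.sql("describe dbName.test_bug").show(100, false)

// Schema used by the direct load: the Parquet file footers.
session.read
  .parquet("s3n://bucketName/module/search_request/eventDate=20180510")
  .printSchema()

// If the two disagree (e.g. headerTime vs headertime), one thing to try is letting
// the Hive SerDe do the read instead of Spark's built-in Parquet reader
// (assumption: applicable to this Spark version).
session.sql("set spark.sql.hive.convertMetastoreParquet=false")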

Related

InsertInto(tablename) always saving Dataframe in default database in Hive

Hi, I have 2 tables in Hive. From the first table I select data, create a DataFrame, and save that DataFrame into another table in ORC format. I created both tables in the same database.
When I save this DataFrame into the 2nd table I get a "table not found in database" issue, and if I don't use any database name it always creates and saves my DataFrame in the Hive default database. Can someone please guide me on why it's not taking the user-defined database and always taking the default database? Below is the code I am using (I am also using HDP):
// creating hive session
val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(sparksession).build()
hive.setDatabase("dbname")

var a = "SELECT 'all columns' from dbname.tablename"
val a1 = hive.executeQuery(a)

a1.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("database", "dbname")
  .option("table", "table_name")
  .mode("Append")
  .insertInto("dbname.table_name")
Instead of insertInto("dbname.table_name"), if I use insertInto("table_name") then it saves the DataFrame in the default database. But if I give dbname.tablename then it shows "table not found in database".
I also tried the same using a dbSession:
val dbSession = HiveWarehouseSession.session(sparksession).build()
dbSession.setDatabase("dbname")
Note: My second table (the target table where I'm writing data) is a partitioned and bucketed table.
// 2. Since the target table is partitioned and bucketed, add .partitionBy(<list cols>) with its partition columns to the write below:
a1.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("database", "dbname")
  .option("table", "table_name")
  .mode("Append")
  .insertInto("dbname.table_name")
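A side note, purely as an assumption on my part: .insertInto() resolves the table through Spark's own catalog and does not go through the HiveWarehouseConnector format and options set on the writer, which could explain why the database option appears to be ignored. A hedged sketch of the HWC-style write that ends in .save() instead, reusing the hive session and names from the question:
// Write through the HWC data source itself; the table is resolved against the
// database set on the HWC session (names reused from the question).
hive.setDatabase("dbname")
a1.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("database", "dbname")
  .option("table", "table_name")
  .mode("append")
  .save()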

Create index in Ignite table when save dataframe from pyspark

I save a Spark DataFrame to an Apache Ignite table with this code:
df.write \
    .format("ignite") \
    .option("table", "REPORT") \
    .option("primaryKeyFields", ', '.join(map(str, df.schema.names[:-1]))) \
    .option("config", configFile) \
    .option("compression", "gzip") \
    .mode("overwrite") \
    .save()
But I cannot find how to create an index on a field with this overwrite save.
I need this, but on the .save() operation:
CREATE INDEX REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)
It's pretty simple to do using syntax like the following:
CREATE INDEX IF NOT EXISTS AGE_IDX ON "PUBLIC".Person (AGE)
If the table wasn't recreated (so the index already exists), IF NOT EXISTS kicks in and nothing is done; otherwise, the index will be created.
It can be run using any SQL tool that works with Ignite (web console, visor, sqlline, JDBC, ODBC, etc.), but I guess that you are going to do it from a Spark job. So you can try to use IgniteSparkSession or IgniteRDD to run SQL over Ignite:
IgniteSparkSession igniteSession = IgniteSparkSession.builder()
    .appName("Spark Ignite example")
    .igniteConfig(configPath)
    .getOrCreate();

igniteSession.sqlContext().sql("CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)");
or
val cacheRdd = igniteContext.fromCache("partitioned")
val result = cacheRdd.sql(
"CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)")
No, you can't do that when saving a DataFrame with Spark. Creating a table and creating an index are 2 different operations.
Here are all the options for DataFrame saving into Ignite; as you can see, there is no option for index creation.
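If you would rather keep it in the same job without the Ignite Spark APIs, a minimal sketch (assuming the Ignite JDBC thin endpoint is enabled on its default port and the thin driver is on the classpath; host and port here are placeholders) is to issue the CREATE INDEX right after the save over plain JDBC:
import java.sql.DriverManager

// Open a thin-client JDBC connection to an Ignite node and create the index.
val conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")
try {
  conn.createStatement().execute(
    "CREATE INDEX IF NOT EXISTS REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)")
} finally {
  conn.close()
}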

spark Dataframe string to Hive varchar

I read data from Oracle via a Spark JDBC connection into a DataFrame. I have a column which is obviously StringType in the DataFrame.
Now I want to persist this in Hive, but as datatype varchar(5). I know the string would be truncated, but that is ok.
I tried using UDFs, which didn't work since a DataFrame does not have varchar or char types. I also created a temporary view and cast the column in Spark SQL:
df.createOrReplaceTempView("t_name")
val casted = spark.sql("select cast(col_name as varchar(5)) from t_name")
But then when I call printSchema, I still see a string type.
How can I save it as a varchar column in a Hive table?
Try creating the Hive table ("dbName.tableName") with the required schema (varchar(5) in this case) and insert into the table directly from the DataFrame, like below:
df.write.mode("append").insertInto("dbName.tableName")
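A minimal end-to-end sketch of that suggestion; the database, table, and column names are placeholders, and the storage format plus Hive's varchar truncation behaviour are assumptions rather than anything stated above:
// Create the target table with the varchar(5) column via Hive DDL (placeholder names and format).
spark.sql("""
  CREATE TABLE IF NOT EXISTS dbName.tableName (col_name varchar(5))
  STORED AS ORC
""")

// Insert the StringType column; the varchar(5) length lives in the table definition,
// so no cast is needed on the DataFrame side (truncation semantics are Hive's).
df.select("col_name").write.mode("append").insertInto("dbName.tableName")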

Filter Partition Before Reading Hive table (Spark)

Currently I'm trying to filter a Hive table by the latest date_processed.
The table is partitioned by:
System
date_processed
Region
The only way I've managed to filter it is by doing a join query:
query = "select * from contracts_table as a join (select max(date_processed) as maximum from contracts_table) as b on a.date_processed = b.maximum"
This way is really time consuming, as I have to do the same procedure for 25 tables.
Does anyone know a way to directly read the latest loaded partition of a table in Spark < 1.6?
This is the method I'm using to read:
public static DataFrame loadAndFilter(String query) {
    return SparkContextSingleton.getHiveContext().sql(query);
}
Many thanks!
A DataFrame with all table partitions can be obtained with:
val partitionsDF = hiveContext.sql("show partitions TABLE_NAME")
The values can then be parsed to get the max value.
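A hedged sketch of that parsing step, assuming partition strings of the form system=.../date_processed=.../region=... and dates that sort lexicographically (yyyy-MM-dd):
// Collect the partition strings and pull out the largest date_processed value.
val partitionsDF = hiveContext.sql("show partitions contracts_table")
val latestDate = partitionsDF.collect()
  .map(_.getString(0)) // e.g. "system=A/date_processed=2017-06-21/region=EU"
  .flatMap(_.split("/").find(_.startsWith("date_processed=")))
  .map(_.stripPrefix("date_processed="))
  .max // lexicographic max is the latest date for yyyy-MM-dd strings

// Then query only that partition so a single partition is scanned.
val latestDF = hiveContext.sql(
  s"select * from contracts_table where date_processed = '$latestDate'")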

Reading hive orc table using spark

I have a partitioned table, with partitions from 2017-06-20 onwards.
My query:
import org.apache.spark.sql.hive.orc._
import org.apache.spark.sql._
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val test_enc_orc = hiveContext.sql("select * from db.tbl where time_key = '2017-06-21' limit 1")
Every time I run it, Spark looks at the partition 2017-06-20:
INFO OrcFileOperator: ORC file hdfs://nameservice1/apps/hive/warehouse/db.db/tbl/time_key=2017-06-20/000016_0 has empty schema, it probably contains no rows. Trying to read another ORC file to figure out the schema.
and searches through all the files for date 2017-06-20, which holds only empty ORC files. But partition 2017-06-21 has files with data. Why doesn't Spark search that date, or any other?
EDIT
Created a test table:
drop table arstel.evkuzmin_test_it;
create table arstel.evkuzmin_test_it(name string)
partitioned by(ban int)
stored as orc;
insert into arstel.evkuzmin_test_it partition(ban) values
("bob", 1)
, ("marty", 1)
, ("monty", 2)
, ("naruto", 2)
, ("death", 4);
It seems like the problem is exactly because of the empty files. In this test case there are none, so everything works. Is there a way to make Spark ignore them?
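One workaround worth trying, offered only as an assumption (not verified against this cluster): the schema sampling that trips over the empty files happens in Spark's own ORC code path, so falling back to the Hive SerDe reader, which takes the schema straight from the metastore, may sidestep the empty files entirely:
// Use the Hive SerDe path instead of Spark's native ORC reader for metastore tables.
hiveContext.setConf("spark.sql.hive.convertMetastoreOrc", "false")
val test_enc_orc = hiveContext.sql("select * from db.tbl where time_key = '2017-06-21' limit 1")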
