Query Redshift from Hive not pushing down predicates - apache-spark

I launched an AWS EMR cluster with EMR 5.28.0, Spark and Hive.
I was used to Spark SQL with the spark-redshift connector, which let me read from and write to Redshift by creating external tables like this:
CREATE TABLE `test`.`redshift_table` (`id` INT, `object_id` STRING)
USING com.databricks.spark.redshift
OPTIONS (
  `tempdir` 's3a://my_bucket/table/',
  `url` 'jdbc:redshift://xxxxxx:5439/database?user=user&password=password',
  `forward_spark_s3_credentials` 'true',
  `serialization.format` '1',
  `dbtable` 'my.table'
)
Now I am looking for the equivalent thing in Hive:
at least to be able to read a Redshift table from Hive (so I can join Redshift data with other tables from the data lake),
and if possible to write to Redshift from Hive too (so I can build ETLs in the data lake that write some results to Redshift).
I've been looking around, but I'm not sure what the format of the CREATE TABLE statement would be, or whether I need to install anything else on the cluster first.
Thanks
Update:
I have now been able to do it on EMR 5.28.0 using these jars:
https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc-handler/3.1.2
https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/1.2.37.1061/RedshiftJDBC42-no-awssdk-1.2.37.1061.jar
and then creating the table in Hive with:
CREATE EXTERNAL TABLE test.table(
  id INTEGER,
  name STRING
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "POSTGRES",
  "hive.sql.jdbc.driver" = "com.amazon.redshift.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:redshift://host:5439/database",
  "hive.sql.dbcp.username" = "user",
  "hive.sql.dbcp.password" = "password",
  "hive.sql.table" = "schema.name",
  "hive.sql.dbcp.maxActive" = "1"
);
The issue I have now is that it does not push down predicates to Redshift. For example, "SELECT * FROM test.table WHERE id = 1;" first runs a Redshift query that reads the whole table. Any idea how to change this behavior, please?
I checked the Hive settings and I have:
hive.optimize.ppd=true
hive.optimize.ppd.storage=true
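One workaround I am considering (untested, and assuming hive.sql.query is supported by this version of the JDBC storage handler) is to define the table over a pre-filtered query with hive.sql.query instead of hive.sql.table, so the filter is part of the statement sent to Redshift rather than relying on predicate pushdown:
-- Untested sketch: test.table_filtered and the filter are placeholders;
-- hive.sql.query replaces hive.sql.table, so the WHERE clause is executed by Redshift.
CREATE EXTERNAL TABLE test.table_filtered(
  id INTEGER,
  name STRING
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "POSTGRES",
  "hive.sql.jdbc.driver" = "com.amazon.redshift.jdbc.Driver",
  "hive.sql.jdbc.url" = "jdbc:redshift://host:5439/database",
  "hive.sql.dbcp.username" = "user",
  "hive.sql.dbcp.password" = "password",
  "hive.sql.query" = "SELECT id, name FROM schema.name WHERE id = 1",
  "hive.sql.dbcp.maxActive" = "1"
);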

Related

Presto hive connector reads .zst file

For the following query, both Hive and Spark SQL work fine, but the result returned by Presto (Hive connector) has wrong encoding/decoding. I wonder how I should configure the Hive connector, or does Presto simply not support reading zstd?
Hive table:
CREATE TABLE mydb.testtb (
mid varchar COMMENT 'mid',
day varchar
)
WITH (
external_location = 'hdfs://userx/mydb/testtb/',
format = 'TEXTFILE',
partitioned_by = ARRAY['day']
)
The files in HDFS are written with zstd compression, e.g.
.../testtb/day=20221113/part-00020-63c1xxxxx000.zst
SQL
select * from mydb.testtb where day=20221113 limit 5
Result from Presto:

InsertInto(tablename) always saving Dataframe in default database in Hive

Hi, I have 2 tables in Hive. From the first table I select data, create a dataframe, and save that dataframe into another table in ORC format. I created both tables in the same database.
When I save this dataframe into the 2nd table I get a "table not found in database" error, and if I don't use any database name then it always creates and saves my dataframe in the Hive default database. Can someone please explain why it doesn't take the user-defined database and always uses the default database? Below is the code I'm using; I'm also on HDP.
// creating the Hive Warehouse Connector session
val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(sparksession).build()
hive.setDatabase("dbname")
var a = "SELECT 'all columns' from dbname.tablename"
val a1 = hive.executeQuery(a)
a1.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("database", "dbname")
  .option("table", "table_name")
  .mode("Append")
  .insertInto("dbname.table_name")
If I use insertInto(table_name) instead of insertInto(dbname.table_name), it saves the dataframe in the default database. But if I give dbname.tablename, it says the table is not found in the database.
I also tried the same thing using a dbSession:
val dbSession = HiveWarehouseSession.session(sparksession).build()
dbSession.setDatabase("dbname")
Note: my second table (the target table where I'm writing data) is partitioned and bucketed.
// 2. partitionBy(...)
// The second table (the target table being written to) is partitioned and bucketed,
// so add .partitionBy(<list cols>) to the writer:
a1.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("database", "dbname")
  .option("table", "table_name")
  .mode("Append")
  .insertInto("dbname.table_name")
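Another route that is sometimes suggested (a sketch only; verify the option names against your HWC version) is to write through the connector's save() instead of Spark's insertInto(), so the connector resolves the database and table from its own options:
// Sketch, not verified on HDP: select the target database on the HWC session,
// then write via the data source's save() rather than insertInto().
hive.setDatabase("dbname")
a1.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("database", "dbname")   // option name as used in the question
  .option("table", "table_name")  // target table (partitioned and bucketed)
  .mode("append")
  .save()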

Create index in Ignite table when saving dataframe from pyspark

I save a Spark dataframe to an Apache Ignite table with this code:
df.write\
.format("ignite")\
.option("table","REPORT")\
.option("primaryKeyFields", ', '.join(map(str, df.schema.names[:-1])))\
.option("config",configFile)\
.option("compression", "gzip")\
.mode("overwrite")\
.save()
But I cannot find how to create an index on a field with this overwrite-save.
I need something like this, but as part of the .save() operation:
CREATE INDEX REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)
It's pretty simple to do using syntax like the following:
CREATE INDEX IF NOT EXISTS AGE_IDX ON "PUBLIC".Person (AGE)
If the index already exists (for example because the table was not re-created), IF NOT EXISTS makes the statement a no-op; otherwise, the index will be created.
It can be run from any SQL tool that works with Ignite (Web Console, Visor, sqlline, JDBC, ODBC, etc.), but I guess you are going to do it from a Spark job. So you can try to use IgniteSparkSession or IgniteRDD to run SQL over Ignite:
IgniteSparkSession igniteSession = IgniteSparkSession.builder()
    .appName("Spark Ignite example")
    .igniteConfig(configPath)
    .getOrCreate();
igniteSession.sqlContext().sql("CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)");
or
val cacheRdd = igniteContext.fromCache("partitioned")
val result = cacheRdd.sql(
"CREATE INDEX IF NOT EXISTS AGE_IDX ON \"PUBLIC\".Person (AGE)")
No, you can't do that when saving a DataFrame with Spark: creating a table and creating an index are two different operations.
Here are all the options for saving a DataFrame into Ignite, and as you can see, there is no option for index creation.
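Creating the index separately, for example over the Ignite JDBC thin driver right after the save, could look roughly like this (the host, index name and column are placeholders):
import java.sql.DriverManager

// Sketch: run the index DDL over the Ignite JDBC thin driver once the table exists.
// Host, index name and column are placeholders to adapt.
val conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")
try {
  val stmt = conn.createStatement()
  stmt.executeUpdate("CREATE INDEX IF NOT EXISTS REPORT_FIELD_IDX ON PUBLIC.REPORT (FIELD)")
  stmt.close()
} finally {
  conn.close()
}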

Spark SQL returns null for a column in HIVE table while HIVE query returns non null values

I have a Hive table created on top of S3 data in Parquet format, partitioned by one column named eventdate.
1) When using a Hive query, it returns data for a column named "headertime", which is in the schema of BOTH the table and the file:
select headertime from dbName.test_bug where eventdate=20180510 limit 10
2) From a Scala notebook, directly loading a file from a particular partition also works:
val session = org.apache.spark.sql.SparkSession.builder
.appName("searchRequests")
.enableHiveSupport()
.getOrCreate;
val searchRequest = session.sqlContext.read.parquet("s3n://bucketName/module/search_request/eventDate=20180510")
searchRequest.createOrReplaceTempView("SearchRequest")
val exploreDF = session.sql("select headertime from SearchRequest where SearchRequestHeaderDate='2018-05-10' limit 100")
exploreDF.show(20)
This also displays the values for the column "headertime".
3) But when using Spark SQL to query the Hive table directly, as below,
val exploreDF = session.sql("select headertime from tier3_vsreenivasan.test_bug where eventdate=20180510 limit 100")
exploreDF.show(20)
it always returns null.
I opened the Parquet file and can see that the column headertime is present with values, but I'm not sure why Spark SQL is not able to read the values for that column.
It would be helpful if someone could point out where Spark SQL gets the schema from; I was expecting it to behave similarly to the Hive query.
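One way to see which schema Spark is actually resolving (a diagnostic sketch only, reusing the names from the question) is to compare the metastore view of the table with the schema read straight from the Parquet files:
// Diagnostic sketch, reusing names from above: compare the schema Spark resolves
// through the Hive metastore with the schema read directly from the Parquet files
// (column-name casing differences between the two are a common culprit).
val fromMetastore = session.table("tier3_vsreenivasan.test_bug").schema
val fromFiles = session.read.parquet("s3n://bucketName/module/search_request/eventDate=20180510").schema
println(fromMetastore.treeString)
println(fromFiles.treeString)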

How to stop loading the whole table in Spark?

The thing is, I have read rights on one table, which is partitioned by year, month and day. But I don't have the right to read the data from 2016/04/24.
When I execute this Hive command:
hive> select * from table where year="2016" and month="06" and day="01";
I can read other days' data, just not 2016/04/24.
But when I read it in Spark:
sqlContext.sql("select * from table where year='2016' and month='06' and day='01'")
an exception is thrown saying that I don't have the right to read hdfs/.../2016/04/24.
Does this mean Spark SQL loads the whole table first and then filters? How can I avoid loading the whole table?
You can use JdbcRDD directly. With it you can bypass the Spark SQL engine, so your queries will be sent directly to Hive.
To use JdbcRDD you need to load the Hive JDBC driver and register it first (if it is not already registered).
import java.sql.{DriverManager, ResultSet}
import org.apache.spark.rdd.JdbcRDD
val driver = "org.apache.hive.jdbc.HiveDriver"
Class.forName(driver)
Then you can create a JdbcRDD:
val connUrl = "jdbc:hive2://..."
val query = """select * from table where year="2016" and month="06" and day="01" and ? = ?"""
val lowerBound = 0
val upperBound = 0
val numOfPartitions = 1

new JdbcRDD(
  sc,
  () => DriverManager.getConnection(connUrl),
  query,
  lowerBound,
  upperBound,
  numOfPartitions,
  (r: ResultSet) => r.getString(1) /** get data here or with a function **/
)
The JdbcRDD query must have two ? placeholders so that it can partition your data, so you should write a better query than mine; this one just creates a single partition to demonstrate how it works.
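For example, if the table had a numeric column to split on (a hypothetical id column here), the two placeholders could bound each partition's slice:
// Hypothetical sketch: assumes a numeric "id" column exists to range over;
// JdbcRDD substitutes each partition's bounds into the two ? placeholders.
val boundedQuery =
  """select * from table
    |where year="2016" and month="06" and day="01"
    |and id >= ? and id <= ?""".stripMargin
val boundedRdd = new JdbcRDD(
  sc,
  () => DriverManager.getConnection(connUrl),
  boundedQuery,
  0L,       // lowerBound
  1000000L, // upperBound
  10,       // numOfPartitions
  (r: ResultSet) => r.getString(1)
)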
However, before doing this I recommend you check HiveContext, which supports HiveQL as well. Check this.
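A minimal sketch of that route (assuming Spark 1.x, where HiveContext is available):
import org.apache.spark.sql.hive.HiveContext

// Sketch for Spark 1.x: run the HiveQL query through HiveContext so that the
// partition columns (year, month, day) in the WHERE clause can be used for
// partition pruning rather than filtering after a full read.
val hiveContext = new HiveContext(sc)
val partitionedDf = hiveContext.sql(
  "select * from table where year='2016' and month='06' and day='01'")
partitionedDf.show()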
